Design for Robustness: Bio-Inspired Perspectives in Structural Engineering
Bio-inspired solutions are widely adopted in different engineering disciplines. However, in structural engineering, these solutions are mainly limited to bio-inspired forms, shapes, and materials. Nature is almost completely neglected as a source of structural design philosophy. This study lists and discusses several bio-inspired solutions classified into two main classes, i.e., compartmentalization and complexity, for structural robustness design. Different examples are provided and mechanisms are categorized and discussed in detail. Some provided ideas are already used in the current structural engineering research and practice, usually without focus on their bio-analogy. These solutions are revisited and scrutinized from a bio-inspired point of view, and new aspects and possible improvements are suggested. Moreover, novel bio-inspired concepts including delayed compartmentalization, active compartmentalization, compartmentalization in intact parts, and structural complexity are also propounded for structural design under extreme loading conditions.
Introduction
Biomimetics is a growing design approach in various fields of engineering. It consists of emulating, i.e., being inspired by, principles, structures, or other solutions found in nature. Basically, it can be argued that the never-ending evolution that takes place in nature leads to alternative but equally effective solutions to many problems. To export solutions from nature to engineering, problem-based or solution-based approaches are possible. The former refers to a top-down approach that searches for a bio-inspired solution to a particular engineering problem. The latter, on the contrary, is bottom-up, that is, taking inspiration from nature for a new engineering design [1].
Bio-inspired solutions have become very popular nowadays and have been widely used in the structural engineering realm. Three scales of bio-inspiration can be formulated in tackling a problem: (i) the organism level, when a specific organism is imitated; (ii) the behavior level, when the behavior of the organism in a larger context is mimicked; (iii) the ecosystem level, if the whole context serves for bio-inspiration [2]. The majority of current research works in the structural engineering realm are focused on form, shape, and material, i.e., the first level. For example, an overview of bio-inspired vibration isolation systems is reported in [3]. Comprehensive reviews devoted to different aspects of bio-inspired forms and materials, including dynamic behavior, energy absorption, and advanced materials, can be found in [4][5][6], and some studies, generally not in the structural engineering field, are devoted to bio-inspired algorithms. However, an exclusive focus on nature as a source of structural design, i.e., design philosophy, is seldom reported.
With the increase in the popularity of bio-inspired solutions, different related but not completely alike terms have emerged: bionics, biomimetics, biomimicry, etc. These concepts are more or less overlapping and the borders are not always very clear. However, they focus on different aspects and levels of bio-inspired solutions with different origins.
In this study, such differences are ignored, and the terms bio-inspired science and bio-inspired solutions are used as umbrella terms to cover the different aspects and levels.
In recent decades, bio-inspired science has become a well-established research area with many applications in different scientific fields, ranging from engineering to the social sciences. However, as mentioned, most of these studies in the structural engineering realm can be categorized into two main classes. The first one relates to the form, shape, and structural configuration adapted, directly or indirectly, from nature. The second one is related to bio-inspired materials. There are inherent similarities between these two classes. Actually, the same bio-inspired concepts are applied at different levels, either the material or the structural level. Nevertheless, bio-inspired solutions that are directly used for design, i.e., as a design philosophy, are genuinely scarce in the structural engineering discipline.
Progressive collapse and structural robustness are among the relevant topics in structural engineering [7][8][9][10]. Different methods for the design against extreme events to ensure structural robustness are discussed by Starossek [11]. Among them, the alternate load path (ALP) method, which consists of providing alternative solutions for transferring the forces from the elevation to the foundation after an initial failure, is well-accepted and widely used both in research and practice. Whereas the ALP method is the main code-based method to ensure structural robustness, the compartmentalization approach [11] (i.e., creating deliberate discontinuities in the structural scheme to control the propagation of damage) is also adopted in both research and practice, especially for long-span structures, namely bridges. The process of conceiving methods for ensuring sufficient robustness of a structure, and preventing local damage from progressing into the collapse of the whole building, has so far followed heuristic approaches mainly based on the observation of previous accidents. The analogy between current robustness approaches and nature's solutions has not been highlighted so far.
In addition to the classic approaches, new horizons are also emerging. Among them, the digital twin concept, i.e., a digital representation of a system that updates from real-time data, is striking [12,13]. This growth is largely driven by advances in related concepts and technologies, namely artificial intelligence, machine learning, the Internet of Things, cloud computing, big data, multi-physical simulation, robotics, 5G, real-time sensors, and quantum computing. Such advances enable the concept to become reality, i.e., they allow the dynamic and live monitoring of structures and facilitate an interactive structural response based on the threat acting on the system. Indeed, this philosophy, i.e., a justified response based on the acting threat, is nature's outstanding way of protection, defense, and survival. The new technological advancements provide unprecedented opportunities for bio-inspiration in practice, which was entirely impossible two decades ago.
However, although no direct, explicit bio-inspired approach is traceable in the development of the current methods for preventing collapse, the analogy is insightful. As previously highlighted, the focus on bio-inspired solutions can help in understanding the currently adopted robustness approaches and promote novel solutions for structural design. The structural design concepts that are discussed in this paper, i.e., complexity and compartmentalization, are novel ideas even without focusing on their bio-inspired aspects. In other words, for the subject of study, even at a pure structural engineering level, we are still struggling with the ideas and concepts: a well-developed and well-accepted framework is not available, and the methodology is not standardized for the design phase, neither for robustness in general nor for complexity and compartmentalization. However, the suggested solutions can be considered as the basis for future advances to develop practical frameworks for next-generation bio-inspired structural design under extreme loading conditions. This study focuses on the bio-inspired approaches that can be used for the robustness design of engineering structures, especially civil infrastructures. In this regard, robustness approaches, i.e., compartmentalization and complexity, are discussed in depth and several examples are included. The analogy between nature's solutions and current engineering solutions is highlighted and, in light of this new insight, current approaches are enriched and novel possible solutions are suggested.
Design for Robustness
The term robustness is encountered in different scientific disciplines, from engineering to biology. In structural engineering, although a unique definition does not exist [14], the term usually refers to the "insensitivity to the local failure" [11]. In the current design philosophy, a robust structure is expected to be self-sufficient enough to respond correctly to damage from the end of its construction through its maintenance life; thus, the structural "organism" is fixed. Although issues on the origin of the damage are still open, structural robustness is not to be confused with resilience, which is related to the use of the structure, e.g., the promptness to resume the activities performed in the building after the damage [9]. Nonetheless, robustness is a major component of disaster resilience [15]. In other words, in most cases, a resilient structure is also robust, and it is very unlikely to see a resilient but non-robust system, especially in civil structures. In other disciplines, namely biology, robustness is used to refer to similar concepts [16]. However, the distinctions and overlapping aspects of robustness and related concepts are not always very clear, since, in biology, the concept is related to canalization, redundancy, stability, and adaptability [17,18].
In the current study, which is devoted to bio-inspired solutions being implemented in structural systems, namely civil infrastructures, robustness is defined as the capacity of the system not to be damaged in a way disproportionate to the initial failure. One must bear in mind that for any possible robustness implementation, precise metrics are needed. To this aim, in structural engineering, several, but not unified, solutions at both local and global levels have been proposed [19][20][21][22]. However, there is still room for improvement and much more effort is needed to develop a general framework (that can be used for different structural systems under different initial local failure regimes) for the quantification of structural robustness.
Several methods to ensure structural robustness have a twin in biology. Although other solutions involving strengthening of the structure exist, the present paper emphasizes two alternative approaches for ensuring structural robustness: compartmentalization, and the possibility of rerouting the loads across the structure thanks to the network connection between the elements, i.e., complexity. Such approaches, partially implemented in structural design as detailed in the specific subsections, are extensively adopted in nature to tackle unexpected and extreme events. That is why in this study the emphasis is put on these strategies, with the aim of improving their efficacy within the framework of a bio-inspired robustness-oriented design.
Compartmentalization
Compartmentalization is a design philosophy [9] and has been successfully used in real constructions [11]. Basically, it consists of creating artificial discontinuities in the structural scheme (by providing physical discontinuity or changes in structural properties, like stiffness and energy dissipation capacity, that lead to discontinuity in the structural behavior under extreme loading conditions) to avoid the propagation of damage and limit its extent after the occurrence of a local failure. The concept of structural compartmentalization has been historically well-known to builders, and has also been suggested for structures under extreme loading conditions [23]. However, the modern use of this idea for the robustness design of civil structures is related to the several works by Starossek that were finally integrated in [11]. In this study, segmentation is considered a special case of compartmentalization in which the system is physically separated. Obviously, compartmentalization is a more general term that can also be applied to functional and active compartmentalized systems. Compartmentalization is widely used in nature to let a species survive. Sacrifice-for-survival mechanisms can be observed in many living organisms, from the organelle to the organ system, and from the organism to the ecosystem. Among them, autotomy in both plants and animals and the hypersensitive response (HR) in plants can be highlighted. At the cell level, programmed cell death (PCD) mechanisms, namely apoptosis, are noteworthy. A well-known example of natural structural compartmentalization can be observed in some plants' seed pods that are physically segmented (see Figure 1a). A survey with a special focus on biomimetics is reported in [24]. In this case, compartmentalization limits the possible damage to one or a few segments (the initial local failure area) and prevents the total destruction of the pod, especially when it is immature. Moreover, compartmentalized seed pods guarantee the uniform distribution of seeds in all directions at release time. This type of physical segmentation can be compared with the construction joints in engineering structures. Such segmentation can save a structure during extreme events, as observed in the 11 September 2001 attack on the Pentagon building (see Figure 1b), where expansion joints limited the damage mainly to one segment and prevented the total collapse of the structure [11]. Autotomy can be observed in both animals and plants, with different levels and mechanisms. Through autotomy, an organism sacrifices a body part as a self-defense mechanism to avoid an external threat and subsequently possible death (total failure of the system). One of the best-known examples of autotomy is that of the gecko's tail (see Figure 2a). In this case, the animal employs autotomy to distract predators, but herein, the underlying concept of sacrificing the member/part to save the system is noteworthy. However, this mechanism is also observed in other animals, e.g., legs in spiders [27], tails in reptiles [28], arms in brittlestars [29], and even in mammals, as skin in mice [30]. An interesting case from a structural point of view is that of the African wood sorrel: when the leaves and flowers of this plant are pulled (i.e., tensile stress), they break easily at their base, leaving the rest of the plant intact [31]. That is in contrast with the so-called "strength strategy" adopted, e.g., by woody plants, in which the failure occurs in the soil-root system [32]. There are two different autotomy mechanisms. In the first, i.e., true autotomy, the animal throws off a part of the body when sufficiently stressed by a threat, but not necessarily with the involvement of mechanical forces. The purpose of such behavior can be either to distract the predators/threat or to release the stress/pain. For example, lizards can contract a muscle to fracture a vertebra [35] under specific conditions (biomimicking the interfacial fracture behavior of lizard tail autotomy is discussed in [36]). When spiders are injected in the leg with bee or wasp venom, they can shed this appendage [37] based on the pain level. In the second type, i.e., false autotomy, observed in both animals and plants, the autotomy occurs under direct mechanical stress in a predefined zone, as discussed for the African wood sorrel [31].
Compartmentalization techniques in current modern engineering structures are similar to the latter, in which controlled failures occur at predefined positions in the structural scheme, namely construction joints, deliberately weak zones, specially designed reinforcement bar configurations, and fuse-type elements (see Figure 2b and [10,11]). Although not reported in the literature, it is theoretically possible to use the "true autotomy" concept in future smart structures. Actually, recent advances in structural engineering, namely the digital twin [12,13], can facilitate the application of this concept. An adjustable structural response (based on the acting load/threat) is seldom reported in structural engineering; for example, magnetorheological dampers to mitigate train-induced [38] and rain-/wind-induced [39] vibrations in bridges are noteworthy. The aforesaid example of the spider injected with wasp venom [37] can be revisited here. The structural changes in these methods (e.g., adjusting the stiffness at specific points and directions, which can modify the dynamic properties of the system) are far less than what is actually needed for active compartmentalization, in which almost complete segmentation is required. However, tracing recent advances in both monitoring science and construction techniques suggests that the "true autotomy" concept can be used in future modern structures. Nature also provides more interesting ideas; for example, the delayed response in Verbascum sinuatum (wavyleaf mullein) is also inspiring [31]. The idea is useful, for example, for allowing evacuation before the controlled partial collapse in the compartmentalization strategy.
The hypersensitive response (HR) in plants is another situation in which the compartmentalization concept is used in a living organism. HR is characterized by the rapid death of cells in the local region surrounding a threat (usually pathogens) to prevent the spread of the problem to other intact parts of the plant (see Figure 3a). Compartmentalization of decay (damage) in trees (CODIT) is also noteworthy here [40,41]. When a tree is wounded under a specific threat, the damaged region does not usually heal or get replaced, in contrast to what usually occurs in animals. Alternatively, trees isolate the damaged parts by producing new tissue around the damaged region, creating a protective boundary and isolating the tissue damaged by decay or infection (see Figure 3b). The concept can be adopted for the compartmentalization of areas affected by corrosion, aging, and chemical attack in concrete and steel. To date, such studies have usually been limited to non-structural levels.
There are several other situations where sacrifice-for-survival mechanisms act to save organisms. An example of such a mechanism is reported in a root stem cell niche subjected to chilling stress [42]. Programmed cell death [43], namely apoptosis, shows interesting and useful characteristics. In vertebrates, necroptosis [44] can also be considered a PCD mechanism, where cell suicide in a programmed fashion aids in the defense against pathogens. Two mechanisms can be observed in apoptosis: the "intrinsic pathway", in which the cell kills itself because it senses stress, and the "extrinsic pathway", in which the cell kills itself because of signals from other cells [45]. In currently engineered compartmentalization, the compartmentalized region is usually within a damaged area or in its vicinity. On the other hand, inspired by PCD, cases can be defined where compartmentalized regions are activated based on the damage progress and threat situation. With ongoing advances in the digital twin and related concepts, there are reasons to be optimistic that a real-time digital replica of the system will soon be possible (actually, such techniques, with some limitations, are already used in special structures [12,13]). Such a revolutionary concept, plus the burgeoning applications of artificial intelligence and machine learning, allows the prediction of the structural response and the determination of critical scenarios faster than the acting threats. Thenceforward, the most suitable region (from a global structural integrity, economic loss, or life-saving point of view) can be compartmentalized. Future smart structures, hypothetically, could monitor the threat progress (say, for example, fire) and determine the damage level (which members are affected, to what extent, and which will be affected) to predict and decide about compartmentalization schemes, which can be far from the direct damage region and even in the intact parts of the system, to increase efficiency and decrease the overall loss. Despite its effectiveness, compartmentalization is very costly and can be considered only after other defense measures have failed. In nature, compartmentalization mechanisms are usually the last line of defense. A similar concept is already used in structural engineering to avoid progressive collapse, in which compartmentalization is only considered for a very large initial failure or when the existence of ALPs cannot be guaranteed [10].
In general, natural compartmentalization phenomena are either active or passive. In the active form, namely true autotomy in the spider leg, the system decides about the segmentation necessity and the appropriate time based on the threat level; therefore, compartmentalization can happen even before any physical damage. On the other hand, in false autotomy, for example in the African wood sorrel, compartmentalization is achieved in a predefined weak region and then activated under specific mechanical stresses. Alternatively, compartmentalization can be categorized as either structural or non-structural: whereas in the former, some mechanical properties (a special configuration in geometry and/or material) allow the compartmentalization under extreme conditions, in the latter, namely HR and CODIT, this is achieved through functional changes. Figure 4 classifies the different natural compartmentalization phenomena.
Complexity
Complexity is the characteristic of systems in which specific properties are the result of the mutual participation of the elements of the system and, thus, the whole is not the mere sum of its components. There is no unique and well-accepted definition of complexity, and the published definitions usually depend on the topic and type of system to which the term is applied. Complexity characterizes the behavior of a system whose components interact in multiple ways and follow local rules, meaning there is no reasonable higher instruction to define the various possible interactions [48]. Similarly, it can be stated that a system's behavior is not the simple sum of the behavior of its components [49]. Weaver drew a distinction between "disorganized complexity" and "organized complexity" based on the number of the parts and the interactions between them [50].
The emergence of behaviors from the arrangement of elements, each of which acts in a separate way, is typical of connected systems [51]. In the civil engineering realm, a complex structure is one that cannot be reduced to a simple scheme without losing important aspects of the structural behavior [49]. Complexity is not a well-documented approach for increasing structural robustness: the research works on this topic are mainly limited to a handful of papers, and only a limited number of studies have focused on the quantification of structural complexity [49,52,53]. To date, no uniform framework for distinguishing between complex and non-complex structures exists.
Complexity is among the main approaches that can be found in living organisms to ensure system robustness. The concept of complexity is similar, but not equal, to redundancy and the ALP. Complexity is more a matter of interaction between different systems and sub-systems, which mutually influence each other when responding to an input. However, in certain structural systems, say precast reinforced concrete structures, redundancy can be considered among the few possible strategies for providing robustness to the system [54]. In the ALP concept, the system shows different responses to different initial failure scenarios; a complex system, however, is insensitive to the initial failure (regardless of the size and location of the initial failure) and is therefore a robust system (an example of a natural complex system, i.e., the mouse brain's vascular network, is shown in Figure 5). In other words, as argued by Kitano [55], robustness is a fundamental feature of evolvable complex systems. As another example, simple bacteria with several hundred genes require carefully controlled environments, whereas others, with ten times the number of genes, can survive when subjected to extreme conditions [56]. Complex systems are usually redundant. Redundancy is a universal property of nervous systems, from the lobster stomatogastric ganglion [58] to the human brain [59]. Three sub-concepts of redundancy in the nervous system, including sloppiness, compensation, and multiple solutions, are suggested and discussed in [60]. Considering that no effective classification of structural complexity has been suggested so far, such a functional categorization is inspiring. At the ecological level, functional equivalence can be considered, in which multiple species can share similar, or even identical, roles in an ecosystem [61]. Several examples of this type of redundancy in complex systems, from plant-pollinator relationships [62] to plant-animal seed dispersal mechanisms [63], can be mentioned. In structural engineering, heretofore, no classification referring to the involved mechanisms has been suggested. However, inspired by nature, namely by functional equivalence and biodiversity, the mechanisms involved in a complex system can be classified based on form and function.
Analyzing biology and complexity, Carlson and Doyle [56] highlighted that the highly optimized tolerance (HOT) conceptual framework well describes the ability of microorganisms to be extremely robust. This ability is the result of millions of years of evolution that created biological systems that are well-structured, heterogeneous, and self-dissimilar (i.e., different patterns are observed at different scales). This allows the system to adapt to large events, with an intrinsic robustness and the ability to respond differently, but with an inherent fragility to local failures, the so-called "robust, yet fragile" behavior. The HOT framework contrasts with the self-organized criticality (SOC) model, which argues that living organisms show robust behavior by changing from one steady state to another, and not by maintaining a given state [16]. To provide such a property, the internal configuration of the system should be generic and self-similar. Although the two models try to describe biological complexity and robustness, suggestions for engineered systems can be drawn. One of the key points that differentiates living from engineered entities is the possibility for the former to evolve over time. Usually, artificial systems, say civil structures, are designed and built not to change. The possibility to adapt, as inspired by nature, is a key point in the design of bio-inspired robust structures.
As mentioned, the ALP method is the main design approach in current research and practice in structural engineering. However, in the current study, ALP is discussed as a subset of complexity. This reflects the fact that complexity approaches can be found in living organisms that ensure system robustness. ALP can be considered an engineered equivalent of complexity in human-made systems. In the ALP approach, the capacity of the structural system after an initial local failure, namely a member loss, is examined. Whereas there is no evidence that this approach was developed through bio-inspiration, a similar concept has been adopted in natural systems for millions of years. A clear example of the ALP concept can be observed in collateral circulation (see Figure 6). Collateral circulation is the alternate circulation around a blocked artery or vein via alternative paths. These alternate paths can be existing vessels or newly developed ones. Several examples of both situations can be observed in different parts of the human body, namely the brain, heart, and kidneys [64]. A complete analogy can be observed in engineering structures after an initial failure, in which alternate load paths activate to prevent the total collapse of the structural system. Following biological insights, two possible trends emerge. On one side, there are structural designs that foster some specific types of damage tolerance and try to make the response of the system to the threat uniform: this is the case, for example, of those schemes in which the structural complexity for a defined loading scheme is maximized [66]. As an example, Figure 7 illustrates a frame structure subjected to vertical and lateral loads (of equal magnitude) on the nodes. The size of the elements ensures maximization of the normalized structural complexity index (NSCI) of the system; hence, the effects of a local element removal are similar regardless of the location of the damage [67]. However, such schemes are prone to failure if the loading changes. Nevertheless, it is theoretically possible to fully implement the natural strategies for ALP in future smart structures, i.e., structures that are able to modify their stiffness, connections, and constraints depending on the types and intensities of the loads acting on them. On the other side, there are structures that are designed to resist specific threats only, e.g., column removal at the bottom level, for all the possible combinations of live and dead loads. As highlighted in biological robustness studies, there is a balance between robustness, fragility, performance, and resource demand that rules the shape of the systems [55].
The ALP strategy can also be implemented by adopting ultra-specialized materials. A clear example from nature comes from the analysis of the local failure of spider webs. Spider webs are masterpieces of natural structural engineering [68]; millions of years of evolution shaped them in order to achieve a desired optimized functionality, i.e., the capture of prey using a minimum amount of silk [69]. Deeply analyzing a spider web, its structural integrity is guaranteed by the stiff behavior of silk under small deformation before the yield point. As proven, the web's structural performance is dominated by the properties of the stiffer and stronger radial dragline silk, suggesting that the spiral threads play non-structural roles, i.e., capturing prey [70].
Cranford et al. [71] simulated the response of spider webs made of different types of fibers with completely different mechanical behaviors (Figure 8). Model (a) refers to the stress-strain behavior of the dragline silk from the species Nephila clavipes. Four distinct regimes characterize the behavior: an initial linear part governed by stretching, an unfolding of the protein domains resulting in a softening, a subsequent stiffening regime, and, finally, a stick-slip deformation. Models (b) and (c) refer to idealized engineered materials with linear elastic and elastic-plastic behavior, respectively. The initial damage is represented by the cut of a radial element. The results show that any change in deformation behavior and web damage is a direct result of differences in the stress-strain behavior of the fibers. In the case of a web composed of natural dragline silk, all radial threads partially contribute to the loading resistance. The fact that the material suddenly softens at the yield point ensures that the load transfer is limited to the loaded radial thread, which then begins to stiffen. In the linear elastic model, the loaded radial thread still carries the majority of the load, but the adjacent radial threads bear a higher fraction of the ultimate load, which results in a greater delocalization of damage after the failure. With the elastic-plastic material, the perfectly-plastic behavior of the radial element enhances load distribution throughout the structure and greatly increases the damage zone.
Discussion and Conclusions
There is an increase in bio-inspired research and its application in civil and structural engineering. However, as reviewed, the available literature is mainly devoted to bio-inspired form, shape, and material, not to structural design philosophy. This paper tries to put forth some novel bio-inspired design strategies to ensure the robustness of civil structures and infrastructures. In this regard, nature's solutions are comprehensively reviewed and various examples are provided. Table 1 summarizes nature's solutions and their possible engineering equivalents for robustness design. Among the proposed solutions, the emphasis is put on two alternative but complementary solutions, i.e., complexity and compartmentalization. Different aspects of these novel concepts are scrutinized and several suggestions are proposed for future smart structures. It should be noted that the application of the suggested new concepts will be facilitated by further advances in other fields, namely construction science, robotics, real-time sensor networks, and the digital twin. In both of the mentioned classes of structural robustness strategies, i.e., complexity and compartmentalization, modifications of the stiffness and connectivity between members/parts are required. In the former, the stiffness of the members at the local level and the connections between the members at the global level (which determine the global stiffness and strength of the system) can be manipulated to maximize the complexity of the system under different loading regimes (or acting threats, i.e., various initial local failures). For the latter (compartmentalization), the possibility of active discontinuity at different levels is favorable.
For the materialization of such adjustability and adaptability, two important steps still need to be taken. (i) The live monitoring and analysis of the structure, which allow the live assessment of the structural response and determine the best possible "changing scenario", in terms of removal, discontinuity, and/or adjustment of the stiffness and energy dissipation capacity, are necessary. To this aim, some fundamental progress, namely the digital twin, has already been achieved. However, more advances are still required. In addition, (ii) the next-generation smart structures should be able to change their stiffness and connectivity. This ability is very limited in existing modern structures, but it is absolutely necessary to realize the concepts suggested in this paper. Recent advances in construction science and robotics serve this aim. It is anticipated that such techniques will first appear in space and military applications and then spread to critical infrastructures. Until then, we need to develop ideas and test the concepts, and this study is dedicated to that purpose.
Two main classes for the bio-inspired robustness design of civil structures and infrastructures, namely compartmentalization and complexity, are observed and discussed. The analogy with current design approaches is demonstrated and possible improvements are highlighted. For compartmentalization, new bio-inspired concepts, namely (i) delayed compartmentalization, (ii) active compartmentalization, and (iii) compartmentalization in intact parts, are suggested. Structural complexity, as a bio-inspired robustness technique, is suggested and discussed. Recent progress in monitoring techniques and burgeoning construction advances will enable structural scientists to mimic nature more closely, and future modern and smart structures can be "live" from the material level to the global system level.
Figure 2. Compartmentalization concept; (a) nature: a surviving white-headed dwarf gecko with its tail lost due to autotomy [33] (taken by Muhammad Mahdi Karim, with no modification, under GNU Free Documentation License, Version 1.2) and (b) engineering: the Confederation Bridge, in which a limited collapse is accepted to ensure structural robustness [34] (with no modification, under Creative Commons Attribution 2.0 Generic license (https://creativecommons.org/licenses/by/2.0/, accessed on 2 February 2023)).
Figure 4. Classification of different natural compartmentalization phenomena. HR, CODIT, and PCD stand for hypersensitive response, compartmentalization of decay (damage) in trees, and programmed cell death, respectively.
Figure 6. Schematic drawing of the coronary artery circulation (a) without and (b) with collateral circulation, based on the concept reported in [65].
Figure 7. Sketch of a structure subjected to vertical and lateral loads on nodes. The size of the elements results from the maximization of the normalized structural complexity index [67].
Figure 8. Different behaviors depending on the material properties of the spider web, from top to bottom: (a) the real material properties, (b) linear elastic behavior, and (c) elastic-perfectly plastic behavior. The damage after similar element removal is plotted, as reported in [71], reprinted with permission.
Table 1. Natural solutions and their analogy in structural engineering for robustness.
Predictive prosthetic socket design: part 1—population-based evaluation of transtibial prosthetic sockets by FEA-driven surrogate modelling
It has been proposed that finite element analysis can complement clinical decision making for the appropriate design and manufacture of prosthetic sockets for amputees. However, clinical translation has not been achieved, in part due to lengthy solver times and the complexity involved in model development. In this study, a parametric model was created, informed by variation in (i) population-driven residuum shape morphology, (ii) soft tissue compliance and (iii) prosthetic socket design. A Kriging surrogate model was fitted to the response of the analyses across the design space enabling prediction for new residual limb morphologies and socket designs. It was predicted that morphological variability and prosthetic socket design had a substantial effect on socket-limb interfacial pressure and shear conditions as well as sub-dermal soft tissue strains. These relationships were investigated with a higher resolution of anatomical, surgical and design variability than previously reported, with a reduction in computational expense of six orders of magnitude. This enabled real-time predictions (1.6 ms) with error vs the analytical solutions of < 4 kPa in pressure at residuum tip, and < 3% in soft tissue strain. As such, this framework represents a substantial step towards implementation of finite element analysis in the prosthetics clinic.
Introduction
The prosthetic socket provides the critical attachment between the residual limb following amputation and the prosthetic device. Each socket is bespoke to the user and is designed in a manual and iterative process by a prosthetist. This process is dependent on their skill and experience, as well as patient feedback (Paterno et al. 2018) with no quantitative prediction of fit prior to the manufacture of the socket. As a result, on average nine fitting and adjustment sessions are required in the first year following amputation (Pezzin et al. 2004). Inadequate socket fit leads to pain and potentially device rejection, restricting activities of daily living (Hsu and Cohen 2013). To ensure a good socket fit, clinicians perform a series of geometrical modifications to the captured shape of the individual's residual limb, known as rectification, targeting optimal load transfer. Traditionally, this involved physical modification of a positive plaster mould. However, digital technologies are becoming more prevalent within the clinical community (Whiteside et al. 2007;Karakoç et al. 2017). Commonly, this approach involves using a surface scanner to digitise the limb's surface shape, performing the patient-specific rectifications in a CAD environment and manufacturing a mould to form the socket within a central fabrication facility (Saunders et al. 1985;Oberg et al. 1989;Sanders et al. 2007).
The residual limb after lower limb amputation is created by forming a soft tissue pad over the resected bone (Smith and Fergason 1999). The complex device/patient geometry together with the significant differences in biological and prosthetic material properties creates a challenging environment for appropriate load transfer. The skin at the interface is subject to high pressure and shear gradients, which frequently lead to discomfort (Lyon et al. 2000) and potentially the formation of chronic wounds, termed pressure ulcers or stump ulcers (Yusuf et al. 2015). This effect is exacerbated by elevated temperature and humidity (Hachisuka et al. 2001; Kottner et al. 2018), which lower the skin's tolerance to load, in addition to diurnal fluctuation in residual limb volume. Further, sustained sub-dermal soft tissue strains can lead to deep tissue injury (DTI) (Portnoy et al. 2009a; Loerakker et al. 2011; Oomens et al. 2015), which may require further amputation surgery (Highsmith et al. 2016).
There has been considerable research into using biomechanical metrics as surrogates for the goodness-of-fit of the prosthetic socket, in particular interface pressure and shear. This has either been measured with interface sensors (Goh et al. 2004;Dou et al. 2006;Dumbleton et al. 2009;Tang et al. 2017) or predicted using finite element analysis (FEA) (Jia et al. 2005;Dickinson et al. 2017). FEA has been identified as a potential tool to assist the prosthetist in their design process, by providing a prediction of fit prior to manufacture (Zhang et al. 1998). However, there are substantial barriers to clinical implementation of these techniques including difficulty in obtaining imaging data, lengthy solver times for the models and the need for a trained user to develop and interpret the FE model (Dickinson et al. 2017). Further, despite the first FE model of a lower limb amputee being published in 1988 (Reynolds 1988), research in this field has not advanced at the rate of many implanted prosthetic devices where tools to simulate the variation in performance across a population are well established (Bryan et al. 2010;Taylor and Prendergast 2015;Ragkousis et al. 2016) or in the prediction of sub-dermal soft tissue strains during seating (Al-Dirini et al. 2016;Luboz et al. 2017). This provides the motivation and objective for the present study, which aimed to develop a surrogate model to allow equivalent predictions to single FEA solutions, across a broad population of anatomical, surgical and design variability, with sufficiently reduced computational expense for clinical use.
Baseline FE model
The baseline FE model was generated from a single MRI scan of a unilateral transtibial residual limb (MAGNETOM Spectra, Siemens Healthcare GmbH, Germany; 3.0 mm slice thickness, 0.5 mm in-slice pixel size, T1-weighted); the participant provided informed written consent (Fraunhofer IPA #2016_BLM_0009), and the data were obtained with secondary data ethical approval (ERGO#29927). The bones, a simplified cartilage-meniscus structure, and the patellar tendon were segmented (ScanIP N-2018.03, Synopsys Inc., USA, Fig. 1a), and other soft tissues were treated as a single body. The meniscus layer was used to facilitate load transfer between the tibia and femur, although sliding was not permitted. The FE mesh was generated with 40272 quadratic tetrahedral elements and imported into an FEA solver (ABAQUS 6.14, Dassault Systèmes, Vélizy-Villacoublay, France). A segmented prosthetic liner was meshed around the residuum with 5882 structured hexahedral elements. Subsequently, a baseline socket shape was extracted from the external surface of the liner, representing a total surface bearing (TSB) design, meshed with 1851 quadrilateral elements. The limb-liner interface was tied, and a Coulomb friction model with a coefficient of friction of 0.5 was defined at the liner-socket interface (Cagle et al. 2018).
Socket donning was simulated under displacement control, to generate initial interference pressure and shear between the limb and the socket, from an initial distance of 20 mm. Following donning, a 400 N axial load at the base of the socket (representative of standing) was applied (Fig. 1e). The proximal cut surfaces of the femur and tendon were constrained in all degrees of freedom. The model was solved using implicit analysis, and all loading conditions were static.
Residuum shape population model
A statistical shape model (SSM) was used to introduce population-representative morphological variation into the FE model. SSM has previously been used extensively to characterise shape variation in biological tissues across an anatomical population (Barratt et al. 2008;Bryan et al. 2010;Woods et al. 2017).
For the present study, 30 surface scans of anonymised rectified transtibial plaster casts were used to generate a principal component analysis (PCA) model. These surface scans were aligned and registered to the external surface mesh of the limb extracted from the MRI scan according to a previously verified methodology using the open-source AmpScan package (Dickinson et al. 2016). The 30 aligned and registered scans, as well as the mesh extracted from the MRI scan, were used as input data for the PCA model. The PCA model was developed using singular value decomposition on a mean centred dataset of mesh vertex locations (Galloway et al. 2012).
The first two PCs of the SSM (Fig. 2a) were found to be dominated by residuum length (i.e. the surgical amputation height) and profile (i.e. how conical or bulbous the limb is) which represented 91% of the population variance (83% PC1, 8% PC2). These two PCs were selected to introduce surgical and anatomical variation into the FE model, respectively. Higher PCs were neglected as they included socket rectification features which were not relevant to this study. For the parametric FE model, the weights of PCs 1 and 2 were constrained within the range of ± 1 standard deviation, σ, about the population mean, while the weights of PC 3 onwards were fixed at the original values of the MRI scan's baseline mesh shape (Fig. 2b).
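For readers who wish to prototype this step, a minimal sketch of the SVD-based PCA construction described above is given below, assuming the 31 aligned and registered scans (30 casts plus the MRI baseline) are stored as flattened vertex arrays; the file name and the swept PC weights are illustrative only.

```python
import numpy as np

# Each row is one registered scan: flattened (x, y, z) vertex coordinates.
# Shape: (31, 3 * n_vertices) -- 30 rectified casts plus the MRI baseline.
X = np.load("registered_scans.npy")      # hypothetical input file

mean_shape = X.mean(axis=0)
Xc = X - mean_shape                      # mean-centred dataset

# Singular value decomposition; rows of Vt are the principal components.
_, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)          # variance fraction per PC

# Generate a new limb shape: keep the MRI baseline's scores for PC3
# onwards, but sweep PC1 (length) and PC2 (profile) within +/- 1 sigma.
scores = Xc[-1] @ Vt.T                   # MRI baseline in PC space
sigma = S / np.sqrt(Xc.shape[0] - 1)     # per-PC standard deviation
scores[0] = 0.5 * sigma[0]               # e.g. +0.5 sigma on PC1
scores[1] = -1.0 * sigma[1]              # e.g. -1.0 sigma on PC2
new_shape = (mean_shape + scores @ Vt).reshape(-1, 3)
```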
Parametric FE model
The FE model was parameterised using seven input variables (Table 1). Four represented morphological variability of the residuum, and three represented the prosthetic socket design.
The four morphological variability parameters were defined using the two statistical shape model PCs described above, soft tissue stiffness and the tibia length relative to the residuum length, where 0% represented the same relative length Fig. 1 Flowchart of the developed workflow. a Segmentation of the MRI scan and creation of the FE mesh, b SSM from PCA of 30 surface scans, c parametric model of TSB socket design, showing the three design variables used to control the press fit at the proximal, mid and distal regions, d Latin hypercube sampling plan of the seven input variables to the parametric model, e application of model boundary conditions including the socket donning and loading, f solution of the FE models as training data, highlighting the regions of interest across the limb, g creation of the surrogate model based on the FE simulations. Dots denote the training data, and the surface shows the fitted function as the baseline model. The soft tissue stiffness was defined using a linear range of elastic modulus values between 35 and 55 kPa, with the Poisson's ratio fixed at 0.49. These bounds were selected to cover the range between stiff, flaccid muscle and contracted muscle (Portnoy et al. 2009b). This stiffness was converted to an equivalent neo-Hookean material to model the nonlinear behaviour of soft tissue (Palevski et al. 2006): (1) where E and v are the elastic modulus and Poisson's ratio and C 1 and D 1 are the constitutive parameters of the slightlycompressible neo-Hookean strain energy density function, W , given by: where I 1 is the deviatoric strain invariant defined as , the deviatoric stretches are given by i = J −1∕3 i , and J is the total volume ratio. The prosthetic liner was also modelled as a hyperelastic material, while the bones, tendon and socket were all modelled as linear elastic (Table 2).
Three variables were used to define the shape of the socket, and represented the 'press fit' at proximal, mid and distal portions of the socket between − 2% and + 6%, defined as a percentage reduction in the radial distance of each node to the first principal axis of the tibia, calculated from the first PC of the mesh nodes (Fig. 1c). These design variables represented a simplified, parametric model of a TSB socket (Fernie and Holliday 1982;Staats and Lundt 1987).
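A sketch of this radial 'press fit' operation is shown below, assuming the tibia's first principal axis has already been computed from the mesh nodes; the function and argument names are illustrative.

```python
import numpy as np

def apply_press_fit(nodes, axis_point, axis_dir, fit_pct):
    """Reduce each node's radial distance to the tibia's principal
    axis by fit_pct percent (negative values oversize the socket)."""
    axis_dir = axis_dir / np.linalg.norm(axis_dir)
    rel = nodes - axis_point                      # (n, 3) offsets
    axial = np.outer(rel @ axis_dir, axis_dir)    # component along axis
    radial = rel - axial                          # component to shrink
    return axis_point + axial + (1.0 - fit_pct / 100.0) * radial
```

In the parametric model, fit_pct would be interpolated between the proximal, mid and distal design variables along the length of the socket.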
Volumetric mesh morphing
Population variability was accounted for in the FE model by morphing the volumetric FEA mesh (Fig. 3). Mesh morphing was performed using radial basis functions (RBFs) based on the technique proposed by Forti and Rozza (2014). While a more comprehensive description can be found in their paper, a summary is detailed below. The matrix containing the mesh nodal coordinates, $\mathbf{X}$, was morphed into the new coordinates $\mathbf{X}'$ by displacing a matrix of control points $\mathbf{C}$ to new coordinates $\mathbf{C}'$. Each morphed nodal coordinate was expressed as:

$$\mathbf{x}'_i = \sum_{j} \gamma_j\, \phi\!\left(\left\| \mathbf{x}_i - \mathbf{c}_j \right\|\right) + \mathbf{c} + \mathbf{Q}\,\mathbf{x}_i \tag{3}$$

where $\boldsymbol{\gamma}$ was a vector of the weights of the basis functions $\phi$, and the vector $\mathbf{c}$ and matrix $\mathbf{Q}$ were the parameters of the linear function included to express rigid translation/rotation.

The mapping function is defined between the initial position of the control points $\mathbf{C}$ and the final position $\mathbf{C}'$ using RBFs by evaluating $\phi\!\left(\left\|\mathbf{c}_i - \mathbf{c}_j\right\|\right)$, enabling the weights $\boldsymbol{\gamma}$ and the linear transformation terms $\mathbf{c}$ and $\mathbf{Q}$ to be calculated by solving a set of linear equations. The mesh transformation is then defined by evaluating the RBFs at $\left\|\mathbf{x}_i - \mathbf{c}_j\right\|$ and calculating $\mathbf{X}'$ from the pre-computed weights and linear transformation terms:

$$\mathbf{X}' = \boldsymbol{\Phi}\!\left(\mathbf{X}, \mathbf{C}\right)\boldsymbol{\gamma} + \mathbf{c} + \mathbf{X}\,\mathbf{Q} \tag{4}$$

Their method required no orthogonal projection or search algorithm and was not computationally intensive unless an extraneous number of control points was used.
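A compact sketch of this interpolation is given below; the dense direct solve and the value of the scaling factor r are illustrative choices rather than those of the cited implementation.

```python
import numpy as np

def rbf_morph(X, C, C_new, r=50.0):
    """Morph mesh nodes X (n, 3) given control points C (m, 3)
    displaced to C_new (m, 3): multiquadric basis plus a linear
    polynomial for rigid translation/rotation."""
    phi = lambda d: np.sqrt(d**2 + r**2)

    m = C.shape[0]
    # Interpolation matrix at the control points, augmented with the
    # linear polynomial terms [1, x, y, z]: the 'set of linear
    # equations' solved for the weights and linear terms.
    D = np.linalg.norm(C[:, None, :] - C[None, :, :], axis=2)
    P = np.hstack([np.ones((m, 1)), C])
    A = np.block([[phi(D), P],
                  [P.T, np.zeros((4, 4))]])
    rhs = np.vstack([C_new, np.zeros((4, 3))])
    coef = np.linalg.solve(A, rhs)
    gamma, lin = coef[:m], coef[m:]          # weights and linear terms

    # Evaluate the RBFs at ||x_i - c_j|| to morph the mesh (Eq. 4).
    Dx = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
    Px = np.hstack([np.ones((X.shape[0], 1)), X])
    return phi(Dx) @ gamma + Px @ lin
```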
A multiquadric biharmonic spline RBF was used, defined by $\phi(x) = \sqrt{x^2 + r^2}$, where $r$ represents the scaling factor controlling the basis shape. To resolve the morphing of both the bone surface and the residual limb surface, the mesh morphing defined in Eq. (4) was performed in two steps. To morph the limb mesh $\mathbf{X}$, two sets of control points were defined across the bony structures, $\mathbf{C}_{\mathrm{bone}}$, and the limb surface, $\mathbf{C}_{\mathrm{limb}}$, and the following procedure was used (Fig. 3):

1. $\mathbf{C}_{\mathrm{bone}}$ was displaced to $\mathbf{C}'_{\mathrm{bone}}$ to represent the new bone length and was used to morph $\mathbf{X}$ and $\mathbf{C}_{\mathrm{limb}}$ into their new locations (Fig. 3b).
2. The displacement field for $\mathbf{C}_{\mathrm{limb}}$ was defined by registering the control points onto the new limb surface from the SSM to generate the new locations $\mathbf{C}'_{\mathrm{limb}}$.
3. $\mathbf{X}$ was then morphed a second time using $\mathbf{C}'_{\mathrm{limb}}$ into the final locations (Fig. 3c).
The meshes of the liner and socket were morphed based purely on the displacement field of the new limb surface from the SSM (Fig. 3d).
Kriging surrogate model
Surrogate modelling enables fitting a continuous function to a set of training data across a multi-dimensional design space. New data points from the surrogate model can often be evaluated several orders of magnitude faster than by the expensive training data generation process, such as FE analyses. A full description and mathematical derivation of surrogate modelling, in particular Kriging-based models, can be found in Forrester et al. (2008).
The seven input variables were normalised into a unit hypercube. Latin hypercube sampling was used to generate the optimal distribution for the selected number of training data points (Morris and Mitchell 1995). A Kriging surrogate model was constructed from the outputs of the training data using the open-source pyKriging package (Paulson and Ragkousis 2015). The Kriging model was used over alternate RBFs due to its robust ability to model nonlinear behaviour, and it enabled the expected error in the surrogate function to be calculated. A sensitivity analysis was performed between 25 and 200 points to determine the number of training data points required to accurately represent the input space (Table 2) for each of the model outputs, based on a test dataset of 75 points.
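A minimal sketch of this sampling-and-fitting loop with the cited open-source package is shown below; run_fe_model is a hypothetical placeholder for the FE pipeline described above, returning one ROI metric per design point.

```python
import numpy as np
from pyKriging.krige import kriging
from pyKriging.samplingplan import samplingplan

n_vars, n_train = 7, 150
sp = samplingplan(n_vars)
X = sp.optimallhc(n_train)            # optimal Latin hypercube in [0, 1]^7

# Placeholder: morph the mesh, solve the FE model and return e.g. the
# 95th-percentile residuum-tip pressure for each normalised input x.
y = np.array([run_fe_model(x) for x in X])

k = kriging(X, y)
k.train()                             # fit the Kriging hyperparameters
p_tip = k.predict([0.5] * n_vars)     # ~ms evaluation vs ~30 min FE solve
```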
FE model outputs
Pressure and shear at the liner-prosthetic socket interface were extracted from regions of interest (ROI) at the residuum tip, tibial tuberosity, fibula head and posterior calf (Jia et al. 2005, Fig. 1f). Sub-dermal soft tissue minimum principal strains were extracted around the soft tissues overlying the bony tibial prominence (Portnoy et al. 2009a). For all metrics, the 95th percentile magnitude was used across the values in the region of interest to quantify high values of pressure which cause socket discomfort while removing any mesh artefact stress peaks which may erroneously occur in the FE model. These metrics were used as the training data to construct the surrogate model.
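As a simple illustration of the metric, once the nodal values for a region have been exported, it reduces to a single percentile call (sketch only):

```python
import numpy as np

def roi_metric(values):
    """95th-percentile magnitude over a region of interest: captures
    high interface loads while suppressing isolated mesh-artefact peaks."""
    return np.percentile(np.abs(values), 95)
```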
PCA-Kriging and real-time visualisation
In addition to localised predictions of biomechanical load at key regions over the residual limb from the surrogate model, a PCA-Kriging model was used to predict the fullfield pressure and shear (Buljak 2010). Using the same formulation as the SSM, a PCA model was constructed from the training data pressure and shear values of all liner-socket interface nodes, named a statistical output model (SOM). An individual surrogate model was constructed for each of the first 20 PCs from the SOM, which represented 99.9% of its variance. This approach was used instead of solving the surrogate on each node of the mesh, reducing the time to compute the Kriging models. This enabled a new full-field prediction, facilitating real-time visualisation of the model.
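A sketch of this PCA-Kriging reduction is given below, assuming X holds the normalised training inputs and Y the full-field nodal outputs (one row per FE run, one column per interface node); names are illustrative.

```python
import numpy as np
from pyKriging.krige import kriging

Y_mean = Y.mean(axis=0)
_, _, Vt = np.linalg.svd(Y - Y_mean, full_matrices=False)
n_pc = 20                                  # ~99.9% of the output variance
scores = (Y - Y_mean) @ Vt[:n_pc].T        # (n_train, 20) PC scores

models = []
for j in range(n_pc):                      # one surrogate per PC score
    kj = kriging(X, scores[:, j])
    kj.train()
    models.append(kj)

def predict_field(x):
    """Real-time full-field pressure/shear prediction for input x."""
    s = np.array([m.predict(list(x)) for m in models])
    return Y_mean + s @ Vt[:n_pc]
```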
Surrogate model
Numerical convergence was achieved for all the FEA simulations within approximately 30 min per simulation (Intel Core i7-4790, 3.60 GHz, 24 GB RAM). New data points from the ROI surrogate models were evaluated in 1.6 ms, representing an increase in solver speed of ∼10^6 times. The mesh morphing algorithm preserved mesh quality throughout, compared to the baseline mesh generated by ScanIP, as demonstrated by the convergence of all the models. The mean and minimum Jacobian, which measure the deviation from the ideally shaped element, were 0.56 ± 0.01 and 0.06 ± 0.02, respectively, across all meshes.
Sensitivity analysis of the surrogate model demonstrated that the limb ROIs required different numbers of training data points (Fig. 4). The correlation between the predicted and observed data was very high, with r² > 0.9 for all surrogate models apart from the 25-training-data-point model. However, analysis of the normalised root-mean-square error (NRMSE) demonstrated that there was still error in the surrogate predictions. The highest error was observed at the residuum tip, with an NRMSE of 8% for 50 training data points, falling to 4% for 150 data points. Further, the surrogate often predicted infeasible pressure values of less than 0 at the residuum tip, due to difficulties in fitting a smooth function to the design space. The pressures at the fibula head and tibial tuberosity were predicted with an NRMSE of 4% for 50 data points.
The PCA-Kriging model enabled the output data to be reduced from 2977 to 20 data points. As such, only 20 surrogate models had to be computed and solved to predict the full-field output data. This facilitated real-time computation of the full-field pressure and shear data (44 ms). This was packaged into a custom graphical user interface to enable visualisation of the full-field data.
Effects of anatomical variability
To investigate the effects of anatomical variability on the biomechanical response of the residual limb, the socket design press fit was fixed at + 1.0%. The residuum morphology was observed to affect the response at all the interrogated ROIs.
Shorter residual limbs were predicted to generate higher residuum tip pressures and distal tibia soft tissue strains, as well as lower posterior calf shear (Fig. 5). Longer, more bulbous limbs were predicted to experience lower pressures over both the tibial tuberosity and fibula head.
The magnitude of soft tissue compressive strain and the soft tissue modulus was closely coupled, with the lower modulus resulting in substantially higher soft tissue strain (Fig. 6). Increasing the relative tibia length was also observed to generate higher soft tissue strain. Conversely, the tissue modulus only had a minor influence on the interfacial pressure and shear.
Patient-specific socket design
Case A represents a short, conical residuum with a long tibia and low tissue modulus (Figs. 7a, 8a); Case B is short and bulbous, with a short tibia and stiff soft tissue (Figs. 7b, 8b); Case C is long and conical, with a long tibia and low tissue modulus (Figs. 7c, 8c); Case D is long and bulbous, with a short tibia and high soft tissue modulus (Figs. 7d, 8d). The underlying shape of the design space was consistent for all cases, whereby increased socket press fit resulted in a reduction in the pressure at the residuum tip to zero and an increase in pressure at the tibial tuberosity and fibula head. However, past the threshold press fit where the residuum tip pressure reached zero, the tibial tuberosity and fibula head pressure continued to increase with press fit. Minimising the residuum tip pressure also reduced the distal soft tissue strain. The residuum tip pressure plateaued at a maximum when the press fit was below 1%. Oversizing the socket (i.e. negative press fit) was shown to maximise residuum tip pressure and distal soft tissue strain while minimising longitudinal shear at the posterior calf.
Discussion
This study presents the first use of a parametric, real-time, FEA-driven model to explore the relationship between residual limb morphology, soft tissue compliance and prosthetic socket design. This allows visualisation of the underlying mechanics for a subset of the variables that the prosthetist considers during the patient-specific socket design process. The ability to sweep across the design space enables the variability within this system to be predicted quantitatively, which would not be feasible using experimental techniques at such a scale.
This also demonstrates a meaningful application of SSM to transtibial amputated residuum surface shapes to characterise the variation in geometry across a population. The first two PCs used in this study were found to contain only gross limb shape variability, which was desired to inform the parametric FE model with a representative population. These PCs have previously been used with linear discriminant analysis as a classification technique between residual limb shapes (Worsley et al. 2015). In contrast to many SSMs, which only capture anatomy and sometimes pathology variation (Babalola et al. 2008), the first PC in this model corresponded to a surgical variation of amputation height. The SSM was constructed from scans of rectified casts and thus exhibited non-anatomic socket design features such as the proximal-posterior 'backslab' build-up for hamstring relief during knee flexion. These were removed in the present model by using the MRI baseline mesh's PC scores for all except PCs 1 and 2.
[Fig. 4 caption: Regression analysis of the surrogate models with different numbers of training data points. In each plot, the x-axis gives the 75 observed data points from the simulations and the y-axis gives the predictions from the surrogate model for the corresponding observed data points.]
Further, the use of RBFs for mesh morphing enabled simple integration with the SSM, allowing the tissue nodes to be displaced while the bone was fixed. As this method relies on solving linear equations rather than requiring a PDE solver as in other mesh morphing methods (Bryan et al. 2010), computational efficiency was achieved while preserving mesh quality.
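A minimal sketch of this kind of RBF-based morphing is shown below, assuming SciPy's RBFInterpolator (the original implementation is not specified here, and all coordinates and displacements are invented). Surface nodes carry the SSM-derived shape change while bone nodes are pinned with zero displacement, and the fitted interpolant then displaces the remaining soft-tissue nodes.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
surface_xyz = rng.uniform(-1.0, 1.0, (500, 3))    # skin-surface nodes
surface_disp = rng.normal(0.0, 0.02, (500, 3))    # SSM shape change
bone_xyz = rng.uniform(-0.3, 0.3, (200, 3))       # bone-surface nodes
bone_disp = np.zeros((200, 3))                    # bones held fixed

ctrl_pts = np.vstack([surface_xyz, bone_xyz])
ctrl_disp = np.vstack([surface_disp, bone_disp])

# Fitting the RBF reduces to solving a dense linear system; no PDE solve
# is required, which is the efficiency argument made above.
rbf = RBFInterpolator(ctrl_pts, ctrl_disp, kernel='thin_plate_spline')

# Morph the interior soft-tissue nodes of the baseline mesh.
tissue_nodes = rng.uniform(-1.0, 1.0, (5000, 3))
morphed_nodes = tissue_nodes + rbf(tissue_nodes)
```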
The model's sensitivity analysis demonstrates the complexity of pressure prediction at the residuum tip, particularly at small press fits. This was supported by the regression analysis, where the highest NRMSE was observed at the residuum tip. When the residuum tip pressure was close to or at zero, the surrogate would often predict negative pressure values. This effect is due to the shape of the design space, where there is a sudden discontinuity at zero, to which the Kriging model attempts to fit a smooth function. Increasing the number of training data points was shown to reduce this effect.
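This fitting behaviour is easy to reproduce on a toy one-dimensional analogue of the tip-pressure response, sketched below with scikit-learn's Gaussian process regressor as an assumed stand-in for the Kriging model. A smooth kernel interpolating a response clipped at zero tends to overshoot below zero near the kink, mirroring the infeasible negative pressures described above.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy response: pressure falls with press fit and is clipped at zero
# past a threshold (all numbers are illustrative).
x = np.linspace(-1.0, 5.0, 30).reshape(-1, 1)
y = np.maximum(0.0, 20.0 * (1.5 - x.ravel()))

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-4)
gp.fit(x, y)

x_test = np.linspace(-1.0, 5.0, 200).reshape(-1, 1)
y_pred = gp.predict(x_test)
# The smooth fit typically dips below zero around the discontinuity.
print(f"minimum predicted pressure: {y_pred.min():.2f}")
```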
The nonlinearity of the model response was particularly apparent at the residuum tip, whose load-bearing ability is a key consideration in socket design (Persson and Liedberg 1982). The surrogate predicts that the magnitude of this load is highly sensitive to both the socket design and morphological variables of the model. This model indicates that shorter residual limbs will result in higher interface pressures, as there is less area over which to distribute the same load. Furthermore, in this case, a tighter fitting socket would be required to off-load the residuum tip. Short limbs are known to be more challenging to fit, and the higher predicted pressures support this (Bowen et al. 2005). Longer, more bulbous limbs were predicted to decrease the pressure over the bony prominences of the residual limb. This is likely due to the increased soft tissue coverage and greater surface loading area contributing to a distribution of pressure over the limb.
This model also highlighted the interplay between different biomechanical metrics. An increase in longitudinal shear around the main body of the limb was shown to suspend the limb within the socket under load, leading to a reduction in pressure at the residuum tip. This reduction in end bearing also reduced the internal strain around the distal tip of the tibia. Conversely, oversizing the socket (i.e. negative press fit) reduced the bulk shear and increased the tip pressure and soft tissue strain. Further, oversizing the socket caused the tip pressure and soft tissue strain to plateau at a maximum, suggesting the limb had reached a state of near full end bearing. The effect of an oversized socket is observed clinically, where tissue atrophy results in the residual limb losing volume.
While the soft tissue modulus was shown to have a minor effect on the interface pressure and shear, it was strongly related to the soft tissue strain at the distal tip. Greater magnitudes of relative tibia length (i.e. lower soft tissue coverage over the distal tibia bony prominence) were also shown to increase tissue strain. FE models of amputated lower limbs have been proposed as damage models to predict deep tissue injury based on exposure to strain over time (Portnoy et al. 2009a; Ramasamy et al. 2018). These results demonstrate the importance of accurate characterisation of the soft tissue stiffness, as this parameter will strongly influence any strain-based prediction of injury. Further, as residual limbs become established, they go through an adaptive process and stiffen. To this end, FE models should be used with caution when defining an absolute threshold of injury. A more appropriate application may be on a comparative basis, for example identifying those patients at most risk and evaluating the range of corresponding prosthesis options.
[Fig. 5 caption fragment: (v_5 = v_6 = v_7 = +1.0). The x-axis of each plot corresponds to residuum length, v_1, and the y-axis to residuum profile, v_2.]
[Fig. 6 caption: Distal soft tissue strain for different values of tibia length, v_3, and soft tissue modulus, v_4, for a +1% press fit socket (v_5 = v_6 = v_7 = +1.0). The x-axis of each plot corresponds to residuum length, v_1, and the y-axis to residuum profile, v_2.]
[Fig. 7 caption: Effects of prosthetic socket design on the biomechanical response of the limb in each of the ROIs. The x-axis of each plot represents the proximal press fit, v_5 (%), and the y-axis the distal press fit, v_7 (%). The mid press fit is the average of the proximal and distal press fits, v_6 = 0.5(v_5 + v_7).]
Limitations
The present study only considered the effect of uniaxial loading to replicate a double-leg stance. The interaction between the residual limb and prosthetic socket is a highly dynamic process (Tang et al. 2015); this was simplified to contain the dimensionality of the study. However, future studies should incorporate either quasi-static or fully dynamic load cases from gait analysis, and could use surrogate modelling to characterise the effects of loading variability (Galloway et al. 2013) and misalignment (Kobayashi et al. 2013). The bone scaling used in this study was based on linear scaling from the tibial tuberosity; therefore, it does not account for the variation in bone profile across populations. Future studies could use an SSM of the tibia to introduce population-representative variation in bone morphology. The coefficient of friction between the liner and socket was based on literature data rather than experimental testing, which will affect the shear forces transmitted at the interface. In addition, this study only considered a simplified total surface bearing socket design with the press fit controlled by three points along its length. Future studies may also consider other, more complex parametric models of socket design. Such a model would be able to incorporate the local rectifications that are necessary to reduce pressure over the bony prominences of the residual limb, typically adopted in design principles such as patella tendon bearing sockets. In the present study, these local rectifications were not considered, leading to high pressures over the bony prominences at high levels of global press fit.
Pressure and shear sensors and lab-based residuum-socket simulators measure the interaction between the residual limb and socket and could be used to reinforce the findings of this study. TSB sockets have been predicted to produce pressure across the limb between 50 and 100 kPa during gait (Dumbleton et al. 2009). This is higher than the simulated pressures in the present study, likely due to the higher forces and moments produced during gait. While it would not be feasible to validate every training data point, owing to the invented residuum shapes, the need to fabricate each socket design and the time taken to run the physical tests, a limited number of studies could be performed to validate some of the underlying mechanisms observed in the model.
Clinical application
The surrogate model would facilitate automated socket design for an individual lying within the training population using optimisation strategies in a relatively short time; 10,000 surrogate function calls could be evaluated in around 5 min. However, caution should be exercised with such an approach. The selection of an appropriate objective function is challenging, as it requires relating biomechanical outputs such as pressure and shear stresses to clinically relevant metrics such as comfort, stability and highly subjective pain thresholds. Further, during socket fitting, local modifications must be made for sensitive regions associated with soft tissue injury or neuroma, which are identified during limb assessment but would not be represented in the computational model. Such an approach would also neglect important psychological aspects of the socket fitting process (Pezzin et al. 2004). This reinforces the importance of a skilled prosthetist within the design of the socket.
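For illustration, an automated design search over such a surrogate might look like the sketch below, which minimises a weighted combination of ROI metrics over the three press fit variables with SciPy's differential evolution. The surrogate functions, bounds and objective weights are all invented stand-ins; as noted above, choosing clinically meaningful weights is the hard part.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical surrogate calls returning ROI metrics for a socket design
# v = (proximal, mid, distal press fit in %). These stand in for the
# trained Kriging models.
def tip_pressure(v):        return max(0.0, 30.0 - 8.0 * np.mean(v))
def tuberosity_pressure(v): return 40.0 + 6.0 * np.mean(v)
def distal_strain(v):       return max(0.0, 0.4 - 0.1 * np.mean(v))

def objective(v):
    # Trade off end bearing against proximal loading; the weights are
    # placeholders, not clinically validated values.
    return (1.0 * tip_pressure(v)
            + 0.5 * tuberosity_pressure(v)
            + 50.0 * distal_strain(v))

bounds = [(-1.0, 5.0)] * 3          # press fit range in %, illustrative
result = differential_evolution(objective, bounds, maxiter=100, seed=3)
print(result.x, result.fun)
```

Since each surrogate call is sub-millisecond, a search of this kind fits comfortably within the ~5 min budget quoted above.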
Alternative workflows using a FEA solver coupled to a CAD package have previously been proposed (Goh et al. 2005;Colombo et al. 2013). The method presented in this paper, however, overcomes many of the data, software, equipment, computational expense and training barriers associated with performing an FE simulation for each new data point.
To leverage both the skill and experience of the prosthetist and the biomechanical predictions of the model, a PCA-Kriging approach was used for real-time, full-field visualisation of the surrogate. It is anticipated that such a tool could be integrated with existing CAD socket design software to support the prosthetist. Residuum shape could be matched to the SSM through surface scans, which are already taken in-clinic, bone length through planar X-rays, and tissue stiffness using indenters (Petron et al. 2016). The socket design variables would then be selected by the prosthetist within the design process. Such a tool could also enhance user engagement in prosthesis design, which may deliver improved confidence, as has been reported anecdotally with CAD/CAM methods.
Conclusion
This study's objective was to develop a surrogate model to allow equivalent predictions to single FEA solutions, across a broad population of amputated residual limb anatomical and surgical variability, and prosthetic socket designs, with sufficiently reduced computational expense for clinical use. The presented framework represents a substantial step towards using quantitative tools to predict the performance of prosthetic socket design prior to manufacture. This study represents the first use of statistically driven morphological variation and parametric prosthetic socket design in predicting the biomechanical response of the residual limb to socket loading. Further, the use of PCA-Kriging to produce a real-time, full-field rendering of the pressure and shear distribution on the residual limb demonstrates a method by which the surrogate could be implemented in a clinical setting. Such a tool would provide the prosthetist with a real-time prediction of socket fit embedded within their CAD package, as part of a more informed socket design process.
[Fig. 8 caption: Interface pressure profiles for the four cases from the population with four different socket designs. Each press fit socket is designed so that v_5 = v_6 = v_7. Four magnitudes of press fit, corresponding to -1, 1, 3 and 5%, were selected. A 45° anterior-lateral view is presented to visualise the pressure at the tibial tuberosity and fibula head.]
Compliance with ethical standards
Conflict of interest None of the authors has any conflict of interest to declare.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 2019-07-02T13:47:55.001Z | 2019-06-29T00:00:00.000 | {
"year": 2019,
"sha1": "9d63b502f77cbf6394716002792bb2de0f15ea53",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10237-019-01195-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "25ecab429ee777ce630c107c8764b0060cb46969",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
243201818 | pes2o/s2orc | v3-fos-license | Other musculoskeletal pain is associated with new-onset low back pain: A longitudinal study among survivors of the Great East Japan Earthquake
Abstract

Background
Low back pain (LBP) is a common health problem experienced after natural disasters. LBP is often concurrent with other musculoskeletal pain; however, the effects of preexisting musculoskeletal pain on LBP are not clear. The purpose of this study was to elucidate the influence of other musculoskeletal pain on new-onset LBP among survivors of the Great East Japan Earthquake (GEJE).
Methods
A longitudinal study was conducted with survivors at three and four years after the GEJE (n = 1,782). Musculoskeletal pain, such as low back, hand and/or foot, knee, shoulder, and neck pain, were assessed with self-reported questionnaires. New-onset LBP was defined as LBP absent at three years but present at four years after the disaster. Musculoskeletal pain except for LBP at three years after the GEJE were categorized according to the number of pain sites (0, 1, ≥ 2). Multiple regression analyses were performed to calculate the odds ratio (OR) and 95% confidence interval (CI) for new-onset LBP due to the other musculoskeletal pain.
Conclusions
Preexisting other musculoskeletal pain was associated with new-onset LBP among survivors in the recovery period after the GEJE. Attention should be paid to other musculoskeletal pain sites to treat and prevent LBP after natural disasters.

Background

Low back pain (LBP) is one of the most frequent health problems worldwide [1]. It leads to disability and limitation of activities in daily life [2]; therefore, gaining an understanding of the factors related to LBP is important. Risk factors for LBP include age, sex, obesity, smoking, psychological distress, and sleep disturbance [2][3][4][5]. Further, musculoskeletal pain often occurs at multiple sites, and single-site pain is considered to increase the risk for pain at other sites [6]. Indeed, some reports have found that LBP occurs concurrently with other musculoskeletal pain [7][8][9]. Most of these studies were cross-sectional; therefore, the influence of preexisting musculoskeletal pain on new onset of LBP is not clear.
Musculoskeletal pain, including LBP, is reported to increase after natural disasters [10]. The Great East Japan Earthquake (GEJE), accompanied by a devastating tsunami, struck the north-eastern coastal areas of Japan on March 11, 2011 [11]. This disaster caused serious damage to these areas and required a long period of reconstruction. A high prevalence of LBP has also been reported after the GEJE [12,13], and previous longitudinal studies have identified associated factors such as subjective economic hardship and sleep disturbance [5,13]. A high prevalence of other musculoskeletal pain was also seen in the recovery phase after the GEJE, and almost half of the survivors had musculoskeletal pain at multiple sites [14]. Since musculoskeletal pain co-exists at multiple sites, we speculated that increased other musculoskeletal pain could be associated with new onset of LBP and could underlie the high prevalence of LBP after natural disasters. The aim of this study was to longitudinally examine the influence of musculoskeletal pain other than LBP on new-onset LBP in the recovery period after the GEJE. For this purpose, we analyzed panel data from surveys conducted three and four years after the GEJE.
Participants
We hypothesized that other musculoskeletal pain could be associated with new onset of LBP after natural disasters. A panel study was therefore conducted with the GEJE survivors living in the severely damaged coastal areas, including Ogatsu and Oshika areas in Ishinomaki City, and Wakabayashi Ward in Sendai City, Miyagi prefecture, Japan. The surveys began three months after the GEJE and were administered every six months. The first study population included residents registered in the Residential Registry of the Ogatsu and Oshika areas and survivors living in prefabricated housing in the Wakabayashi Ward. From November 2013 to February 2014, three years after the GEJE, the residents (aged 18 years or over) who were registered in the Residential Registry of Ogatsu and Oshika areas, and the survivors who had participated in the previous survey in Wakabayashi Ward, were recruited (n = 6,396). Self-reported questionnaires and informed consent forms were mailed to these residents and a 44.6% (2,853/6,396) response rate was obtained. Among those, the participants who already had LBP were excluded (n = 663). The remaining participants were followed from November 2014 to February 2015, four years after the GEJE, and an 81.4% (1,782/2,189) follow-up rate was obtained for this period. Finally, a total of 1,782 participants were included in this study (Fig. 1). This study was approved by the institutional review board of our university (approval number: 201192) and was performed in accordance with the ethical standards as laid down in the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards.
Musculoskeletal pain
Musculoskeletal pain was assessed using self-reported questionnaires based on the Comprehensive Survey of Living Conditions. The questions were: "Have you had symptoms in the last few days? If yes, please place a mark next to all your symptoms." Examples of the choices were palpitations, dizziness, diarrhea, and musculoskeletal symptoms such as low back, hand and/or foot, knee, shoulder, and neck pain [14]. The outcome of interest was new-onset LBP, which was defined as LBP absent at three years (first period) and present at four years (second period) after the GEJE. The main predictor was musculoskeletal pain except for LBP in the first period, which included hand and/or foot, knee, shoulder, and neck pain. Musculoskeletal pain except for LBP was categorized into three groups according to the number of painful sites (0, 1, ≥ 2).
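As an illustrative sketch, deriving this exposure variable from the binary symptom indicators could look as follows; the column names and records are invented.

```python
import pandas as pd

# Binary indicators from the symptom checklist (illustrative records)
df = pd.DataFrame({
    "hand_foot_pain": [0, 1, 1, 0],
    "knee_pain":      [0, 1, 0, 0],
    "shoulder_pain":  [0, 0, 1, 0],
    "neck_pain":      [0, 1, 0, 0],
})

sites = ["hand_foot_pain", "knee_pain", "shoulder_pain", "neck_pain"]
n_sites = df[sites].sum(axis=1)
# Collapse the count into the exposure categories used in the analysis
df["pain_category"] = pd.cut(n_sites, bins=[-1, 0, 1, 4],
                             labels=["0", "1", ">=2"])
print(df)
```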
Statistical analysis
Univariate and multivariate logistic regression models were used to calculate odds ratios (OR) and 95% confidence intervals (95% CI) for new-onset LBP according to the number of musculoskeletal pain sites except for LBP in the first period. Variables included in the analysis were sex, age (< 65 or ≥ 65 years), BMI (< 18.5, 18.5 to < 25, ≥ 25, or unknown), living area (Ogatsu, Oshika, or Wakabayashi), smoking habits (non-smoker, smoker, or unknown), drinking habits (non-drinker, < 45.6 grams of alcohol per day, ≥ 45.6 grams of alcohol per day, or unknown), comorbid conditions (absence or presence of each comorbid condition), working status (unemployed, employed, or unknown), walking time per day (< 30 min, 30 min to < 1 h, ≥ 1 h, or unknown), living status (living in the same house as before the GEJE, prefabricated housing, new house, others, or unknown), subjective economic conditions (normal, a little bit hard, hard, very hard, or unknown), psychological distress (absence, presence, or unknown), sleep disturbance (absence, presence, or unknown), and social isolation (absence, presence, or unknown). We further divided the participants into subgroups by age (< 65 or ≥ 65 years) or sex (male or female), and ORs and 95% CIs for new-onset LBP were calculated in the same manner. For the stratified analysis, the multiplicative interaction between musculoskeletal pain except for LBP and age or sex was tested using the Wald test. In addition, the ORs and 95% CIs for new-onset LBP according to each musculoskeletal pain except for LBP in the first period were evaluated. We included the same variables (Model 1) and added each musculoskeletal pain, such as hand and/or foot, knee, shoulder, and neck pain, as covariates (Model 2). All statistical analyses were performed using SPSS 24.0 (SPSS Japan Inc, Tokyo, Japan). A p value of < 0.05 was accepted as statistically significant.
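The analyses were run in SPSS, but the core OR calculation is straightforward to sketch; a minimal analogue using Python's statsmodels is shown below, with invented data and a reduced covariate set purely for illustration. Exponentiating the logistic regression coefficients and their confidence limits yields the adjusted ORs and 95% CIs.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1782
# Invented stand-ins for the questionnaire-derived variables
df = pd.DataFrame({
    "new_onset_lbp":  rng.integers(0, 2, n),
    "pain_cat_1":     rng.integers(0, 2, n),  # 1 pain site vs 0 (reference)
    "pain_cat_2plus": rng.integers(0, 2, n),  # >= 2 pain sites vs 0
    "age_65plus":     rng.integers(0, 2, n),
    "female":         rng.integers(0, 2, n),
})

X = sm.add_constant(df[["pain_cat_1", "pain_cat_2plus", "age_65plus", "female"]])
fit = sm.Logit(df["new_onset_lbp"], X).fit(disp=False)

# Exponentiated coefficients give the adjusted ORs with 95% CIs
ci = np.exp(fit.conf_int())
ci.columns = ["2.5%", "97.5%"]
or_table = pd.concat([np.exp(fit.params).rename("OR"), ci], axis=1)
print(or_table.round(2))
```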
Results
Baseline characteristics of the participants are presented in Table 1. Among the 1,782 participants, 1,343 (75.4%) had no musculoskeletal pain regions except for LBP in the first period, 283 (15.9%) had one, and 156 (8.8%) had two or more. The participants who reported having musculoskeletal pain except for LBP were more likely to be female and older. They were also more likely to have high BMI, comorbid conditions such as hypertension and myocardial infarction, short walking time, subjective economic hardship, psychological distress, sleep disturbance, and social isolation (Table 1). The rate of new-onset LBP was 14.1% (251/1,782). The crude and adjusted ORs and 95% CIs for new-onset LBP according to the number of musculoskeletal pain regions except for LBP are shown in Table 2. Musculoskeletal pain except for LBP was significantly associated with new-onset LBP in the crude and adjusted analyses. Using "0" as a reference, the adjusted ORs and 95% CIs for new-onset LBP were 1.69 (1.17-2.42) for "1" and 2.85 (1.86-4.36) for "≥ 2" musculoskeletal pain regions except LBP (p for trend < 0.001) (Table 2). The results of the stratified analysis are shown in Table 3. Musculoskeletal pain except for LBP was significantly associated with new-onset LBP in each group. The association was stronger in older (≥ 65 years) compared with younger (< 65 years) participants (p for trend: < 0.001 in "≥ 65 years" and 0.026 in "< 65 years"), and in males compared with females (p for trend: < 0.001 in males and 0.011 in females). There was no statistically significant multiplicative interaction between musculoskeletal pain regions except for LBP and age or sex (Table 3).
For each musculoskeletal pain site, hand and/or foot, knee, shoulder, and neck pain were all associated with new-onset LBP in Model 1, and the association remained significant for knee and neck pain in Model 2; the adjusted ORs and 95% CIs are reported in Table 4.
[Table 1 footnotes: Where an item has a limited number of respondents, the actual number is not necessarily in accordance with the total. **22.8 g of alcohol amounts to 1 go, a traditional unit of sake (180 ml), which also approximates two glasses of wine (200 ml) or beer (500 ml) in terms of alcohol content. Categorical values are presented as numbers and percentages (%). GEJE: Great East Japan Earthquake.]
Discussion
The present study revealed that preexisting other musculoskeletal pain was associated with new-onset LBP among the survivors in the recovery period after the GEJE. Further, the effect was stronger with musculoskeletal pain that occurred at multiple sites.
Some cross-sectional studies have shown that musculoskeletal pain often occurs at multiple sites, such as the shoulder, elbow, knee, and low back [18,19]. Further, other authors have reported a significant association between LBP and neck or knee pain [7][8][9]. A small number of longitudinal studies have investigated the effect of musculoskeletal symptoms on LBP onset. Smith et al. reported that preexisting pain resulting from arthritis or injury was associated with new onset of LBP [20]. Papageorgiou et al. showed that musculoskeletal pain history was a predictor of subsequent LBP [21]. The results of the present study reveal that the existence of musculoskeletal pain is associated with subsequent onset of LBP, which corresponds with these reports. There has been speculation in the literature about the association between concurrent pain at different sites. Pain at one site can negatively affect motion or posture and place additional burden on other parts of the body [22]. Factors associated with pain at one site can also be related to pain at other sites [23]. In addition, pain at one site can cause central sensitization, which can result in the development of pain at other sites [8]. These mechanisms can explain the association between preexisting musculoskeletal pain and new-onset LBP. Further, to our knowledge, this is the first study to report that the effect of musculoskeletal pain on onset of LBP becomes stronger with multisite musculoskeletal pain. Nordstoga et al. reported that LBP with an increasing number of musculoskeletal pain sites tends to have a worse recovery rate, which also supports our results [24]. The association of musculoskeletal pain with LBP is considered to be stronger with an increased number of pain sites. A high prevalence of musculoskeletal pain was reported after the GEJE and many survivors had pain at multiple sites [14]. This is presumed to be one explanation for the increased LBP after the GEJE.
Attention should be paid to other musculoskeletal pain sites to treat and prevent LBP after natural disasters.
The stratified analysis according to age and sex categories revealed that the association of other musculoskeletal pain with new-onset LBP was significant in each category, demonstrating the robustness of the association in this study. The rate of musculoskeletal pain was higher in participants aged ≥ 65 years compared with those aged < 65 years, and the association between the other musculoskeletal pain and LBP was stronger in those aged ≥ 65 years. Generally, musculoskeletal pain, especially multisite pain, is more common among older adults [6,19], and they are considered to be more vulnerable to such pain. Conversely, the rate of musculoskeletal pain was higher in females compared with males; however, the association of musculoskeletal pain with LBP was stronger in males. Musculoskeletal pain, especially multisite pain, is more common among females [18,19], and various factors may affect such pain, which is assumed to weaken the association of musculoskeletal pain with LBP in females. Further, for each musculoskeletal pain site, hand and/or foot, knee, shoulder, and neck pain were all associated with new onset of LBP in Model 1. Some authors have reported associations between LBP and hand or foot [25], knee [9,23], shoulder [25], and neck pain [8] in cross-sectional studies. There have also been a small number of longitudinal studies on the association between LBP and each musculoskeletal pain, and preexisting LBP was reported to be associated with onset of knee [22] and neck pain [7]. To our knowledge, the present study was the first to report that preexisting hand and/or foot, knee, shoulder, and neck pain were each individually associated with onset of LBP before adjusting for the effect of the other musculoskeletal pain. Further, even when this effect was considered, knee and neck pain remained associated with new-onset LBP. There is a closed kinetic chain relationship between the knee and lower back [9], and dysfunction of the knee joint due to pain can easily result in compensation and pain in the lower back. The spine undergoes a similar ageing process, with shared genetic influences and risk factors for pain in the neck and lower back [7], which can cause LBP following neck pain. The association of knee or neck pain with LBP was therefore considered stronger compared with other pain such as hand, foot, and shoulder pain. On the other hand, the association of hand and/or foot and shoulder pain with LBP was not significant when the effect of the other musculoskeletal pain was considered. Musculoskeletal pain except for LBP may also be associated with other pain, and that association may affect the results.
Further, survivors who had LBP in the first period were excluded because the purpose of this study was to assess the effect of musculoskeletal pain other than LBP on LBP onset. Survivors who already had both LBP and other musculoskeletal pain were thereby excluded, which could attenuate the association.
This study had several limitations. First, the questionnaires and informed consent forms were mailed to the participants and the response rate for the first period was not high. Responders might be healthier than non-responders, which could reduce the observed rate of musculoskeletal pain. Second, musculoskeletal pain was assessed using a self-report questionnaire, which included five pain sites but did not include other pain sites such as the hip or elbow. Pain at these sites could also affect the onset of LBP but was not assessed in this study. Finally, this study did not have a control group because the GEJE devastated vast areas, making it difficult to assess the difference between disaster-stricken and unaffected areas.
In conclusion, preexisting musculoskeletal pain at other sites was associated with new-onset LBP among survivors in the recovery period after the GEJE. | 2020-01-23T09:09:31.633Z | 2020-01-21T00:00:00.000 | {
"year": 2020,
"sha1": "003550fc3f5622cab5b9effbd2dc25287b2534bd",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-12011/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "971d8ebe720132c5f9374e3f41503d02f269d725",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": []
} |
237795636 | pes2o/s2orc | v3-fos-license | Comparing sources of stress for state and private school teachers in England
Teaching is understood to be a highly stressful profession. In England, workload, high-stakes accountability policies and pupil behaviour are often cited as stressors that contribute to teachers' decisions to leave posts in the state-funded sector. Many of these teachers leave state teaching to take jobs in private schools, but very little is known about the nature of teachers' work in the private sector. This research addresses this gap in knowledge and compares the sources of stress experienced by 20 teachers in the state sector to those of 20 teachers in the private sector. The paper is based on qualitative data from a larger study. It analyses data collected in interviews and focus groups with classroom teachers and middle leaders working in mainstream primary and secondary phase education in England. The results emphasise state school teachers' acute distress in relation to workloads driven by accountability cultures. In comparison, private school teachers report less intense experiences of work-related stress, but some identify demanding parents as a concern. The research's novelty lies in this comparison between sectors, and these sector-specific insights may help to focus school leaders' efforts to improve teaching conditions in both sectors.
Introduction
Teaching in England is recognised as a highly stressful profession (Kidger et al., 2016). As in other high-stakes accountability contexts, including Canada, the USA and Australia, teachers' work in England is characterised by heavy workloads and intense scrutiny (Viac & Fraser, 2020; von der Embse et al., 2016). Research acknowledges the link between teachers' working conditions, stress and ongoing teacher retention difficulties in England and elsewhere (CooperGibson, 2018; Viac & Fraser, 2020). Mirroring the USA and Australia, teacher retention is an acute problem in England, with an estimated one third of teachers leaving state school teaching within 5 years of qualification (Department for Education, 2019a). While there is sparse evidence concerning private school teacher supply and retention, research from the state sector demonstrates that pupils are unequally affected by teacher supply problems. Those taught in the most disadvantaged schools are more likely to be taught by unsuitably qualified staff, and such schools experience greater difficulties in recruiting and retaining teachers (Allen & McIntyre, 2019).
In a quest for a better work-life balance and improved job satisfaction, many teachers leaving England's state schools commence work in other sectors or continue in education but in non-teaching roles (Perryman & Calvert, 2020; Worth et al., 2018). However, around 16% of those who leave state school teaching (and who are not retiring) do in fact remain in the profession, but they assume jobs in private schools (Worth et al., 2015). Although approximately 17% of England's national teacher workforce is employed by private fee-charging schools (Department for Education, 2018a, 2018c), the work of those teachers is scarcely documented and so there is little knowledge of the way in which the type or intensity of stress compares for teachers across sectors. Heavy workloads, high-stakes accountability environments, pupil behaviour and perceived poor collegial support are acknowledged stressors for state school practitioners (Chaplain, 1995; MacBeath, 2009; Skinner et al., 2021). Limited information is available concerning teacher stress in the private sector, although longer holidays, improved pupil behaviour and better pay are commonly believed to be benefits of working privately (Green et al., 2008). There is some anecdotal evidence that private school teachers might experience stress from parents. Specifically, teachers may encounter 'pushy parents' who place significant demands on teachers' time and expect them to obtain particular academic outcomes with their child (The Secret Teacher, 2015; Ward, 2014). As Peel (2015) suggests, fee-paying parents might anticipate good academic results as evidence of teachers delivering 'value-for-money'. Teachers could experience this pressure for results as stressful, especially as the financial success of private schools is contingent on customer satisfaction.
While stereotypes of well-behaved pupils and the 'pushy parent' abound, there is little research-based evidence which supports the notion that private school teachers experience more or less stress from parents or pupils, or any other factor, compared to those in the state sector. To address this gap in knowledge, our study undertakes a novel exploration of the sources of stress for private school classroom practitioners and middle leaders and compares these to the experiences of those working in the state sector. Findings are interpreted within their wider policy context and understood through the lens of the job demands-resources theory. The work is of potential significance to school leaders in contexts where teacher stress and retention present as problematic. By identifying sector-specific sources of stress, our findings could support those leading government-funded and private schools to develop targeted strategies to improve the working conditions of their teaching staff.
Literature review
Evidence from the Teaching And Learning International Survey (TALIS) shows that an average of 18% of teachers in surveyed countries experience a lot of stress in their work (OECD, 2020). This percentage rises to 30% of teachers when just England is considered, and teachers in Portugal, Australia and the USA also report higher than average stress (OECD, 2020). Other recent estimates support the finding that teacher stress is of concern in England. Worth and Van den Brande (2019) report that around one fifth of teachers in England feel stressed 'most' or 'all' of the time. Kidger et al.'s (2016) study makes a similar finding, that around one in five of the surveyed teachers expressed at least some symptoms of depression or anxiety, and that these conditions were linked to teacher absence. New evidence contests claims that teachers' mental health has declined over time and suggests that, due to shifting social trends, teachers (like others in the non-teaching population) may now be more likely to report mental health difficulties, which in part explains the rise in reported cases of stress, anxiety, depression and other mental health illnesses (Jerrim et al., 2021). Although more research is needed into the longitudinal quantification of teachers' mental health, existing retention, stress and wellbeing literature agrees that for many, teaching feels stressful and that these feelings of strain underpin teachers' decisions to leave the profession (Perryman & Calvert, 2020).
The potential impacts of teacher stress are well-documented across the globe, with studies concerning Norway and the USA linking teacher stress to attrition, teacher absenteeism and, crucially, reduced pupil outcomes (Howard & Howard, 2020; McLean & Connor, 2015; Skaalvik & Skaalvik, 2016). There is further evidence that pupils taught by stressed teachers report lower levels of school satisfaction and have worse views of teacher caring (Ramberg et al., 2020). This link between teacher stress and pupils' experiences of school is particularly concerning because cross-country studies find that teachers working in disadvantaged contexts report higher levels of stress (OECD, 2020). As such, pupils in the most disadvantaged schools may be taught by highly stressed teachers, and this factor could contribute to impaired educational outcomes for these pupils. Therefore, identifying stressors and resourcing teachers to manage the demands of their work could have benefits across the school eco-system and contribute to better teacher mental health, improved retention, enhanced pupil outcomes and greater pupil and parent satisfaction.
What is teacher stress?
Teacher stress is understood to arise from contextual factors within the school setting (Kyriacou & Sutcliffe, 1977). Kyriacou (2001) defines it as: 'The experience by a teacher of unpleasant, negative emotions, such as anger, anxiety, tension, frustration or depression, resulting from some aspect of their work as a teacher' (Kyriacou, 2001, p. 28).
This broad definition of 'unpleasant, negative emotions' as 'stress' allows a wide understanding of how stress might manifest and be experienced and discussed by individuals. The 'job demands-resources' conceptualisation of stress posits that factors such as pupil behaviour or long working hours only become stressful when staff are inadequately resourced to meet the demands of their job (Bakker et al., 2004, 2007). Under this view, job demands are defined as: '[T]hose physical, social, or organisational aspects of the job that require sustained physical and/or psychological (i.e., cognitive or emotional) effort on the part of the employee and are therefore associated with certain physiological and/or psychological costs' (Bakker et al., 2007, p. 275). This definition enables an understanding of job demands as tasks, duties and requirements that necessitate a physical, emotional or psychological input from the employee. For teachers, job demands may include excessive marking workloads, a poorly resourced environment, large classes, a heavy teaching timetable or demanding parents. These demands become stressors when the teacher lacks the internal or external resources to support them to manage these strains (Bakker et al., 2007). Research has shown that teachers can be resourced to meet the demands of their work through a range of provision. Borman and Dowling's (2008) meta-study of teacher retention research found that higher salaries and qualified teacher status are associated with improved retention, whereas Gordon (2020) reports that bespoke mentoring and a supportive collegial environment can foster teacher wellbeing for those in the early career stage. Additionally, government commissioned reports indicate that some workload reduction initiatives might help teachers to meet the demands of their work (Robinson & Pedder, 2018).
Stressors
Work structure. The way in which the school year is structured may contribute to stress. The academic year entails long and intensive days during term-times interspersed by breaks approximately every 6-8 weeks. This intense work pattern may become stressful if teachers do not have adequate opportunity to recover in the evenings and at weekends (Demerouti et al., 2009; Worth et al., 2018). National survey data indicate that weekend and evening work are commonplace for state school teachers, with middle leaders and classroom teachers working on average nearly 13 hours per week during non-school time (Walker et al., 2019). Less is known about the structure of private school teachers' work or whether their workloads are associated with stress. However, marketing materials for many private schools emphasise that they offer weekend and evening sports fixtures and school community events which are arranged and supervised by teaching staff (Peel, 2015). It may, therefore, be the case that those working in private schools also experience intense terms with little recovery time available. That said, when holidays are considered, those working privately typically benefit from annual leave of around 16 to 20 weeks (compared to 13 weeks in the state sector), and this extended leave may provide adequate opportunity to recover from stressful term-time work demands (Griff, 2013, 2014). It is worth noting that there is considerable variation in the length of holidays and school days for private school teachers. Those in the private sector might experience long working days with the expectation that they participate in sporting, musical, artistic or social activities in addition to their classroom teaching hours. Those working in boarding schools are likely to have longer school holidays compared to both day school and state school teachers, but they may experience more protracted days with duties and school activities routinely extending into evenings and weekends. For boarding house mistresses/masters (those who live in the boarding houses with pupils), the role may entail responsibility for pupils around the clock. These suggestions are supported by literature from the former teaching union, the Association of Teachers and Lecturers (ATL). Its guidance to members lists long working days, after-school commitments, stress from the market forces of the sector and 'pressure from higher parental expectation' as disadvantages of working in the sector in comparison to state-funded schools (ATL, 2014, p. 4).
Workload. Prior research has recognised the quality and quantity of workload as a key factor that contributes to teacher stress (Boyle et al., 1995; Brown & Ralph, 2002). According to recent national scale surveys, full-time classroom teachers and middle leaders in England work just under 53 hours per week and most consider workload to be a 'very serious' or 'fairly serious' problem (Walker et al., 2019). Evidence gathered prior to the outbreak of COVID-19 in England suggested that this perception was improving for state school teachers, perhaps owing to the workload reduction initiatives developed and implemented by government, teaching unions and school leaders (Walker et al., 2019). Data from TALIS 2018, which include a small sample of private school teachers, indicate that teachers of lower secondary aged pupils (approximately aged 11-14) in the private sector work 5 hours per week more than their state counterparts, but that they report higher levels of job satisfaction (Jerrim & Sims, 2019). While workload has been the focus of government ambitions to improve teacher retention in state schools (Department for Education, 2019b), little is understood about private school teachers' comparative experiences of workload or whether those who move sectors find the workload more or less stressful in one sector compared to the other.
Pupil behaviour. Most models of teacher stress identify pupil behaviour as a key stressor. Boyle et al. (1995), for example, find that pupil behaviour is the factor that explains the most stress variance for teachers in Malta and Gozo. Pupil behaviour is important to consider because it has been linked to teacher retention. A recent working paper from the Department for Education reports that teachers who perceive pupil behaviour to be poor are more likely to leave their jobs compared to those who hold better perceptions of this factor (Sims & Jerrim, 2020). Other research suggests that strong in-school support can help teachers to manage stress from pupil misconduct and reduce the odds of them leaving their posts (Johnson et al., 2012). There is some evidence from TALIS 2013 that those working in private schools hold better views of the disciplinary environment (Micklewright et al., 2014). Although small numbers of private school teachers were included in TALIS 2013, the findings could be indicative that pupil behaviour is a less pertinent concern for private school teachers compared to their state counterparts.
Class sizes are a further point for consideration when understanding stress from pupil behaviour. Small class sizes are a salient selling point of many private schools, and the Independent Schools Council (ISC), which is the umbrella organisation for most of the mainstream private schools in England, reports an average pupil-to-teacher ratio of 8.5:1 compared to 18:1 in state schools (ISC, 2019; Department for Education, 2018a). As such, it might be expected that private school teachers experience less stress from pupil behaviour compared to colleagues in the state sector because they have fewer pupils to manage in the classroom.
Parents/guardians. Strained relationships between teachers and parents may be stressors in both state and private school contexts. Survey data collected by ATL (2016) indicate that increasing workloads propelled by pressures from parents and school leadership are causes of concern for private school teachers. Recent research from Ofsted (2019), the body that inspects all state-funded schools in England, likewise comments that 'relationships with parents can be a negative source of stress' for state school teachers (p. 7). The Ofsted report lists parents' unrealistic expectations, frequent emails and parents raising complaints as specific activities that contribute to teacher stress. These findings, although relating to the experiences of state school teachers, mirror the sources of stress outlined in the ATL report, thus indicating that relationships with parents may be similarly stressful for teachers in both sectors.
Methodology
Data for this study were collected through a series of in-depth one-to-one interviews and focus groups with teachers who taught in mainstream schools for pupils aged 5 to 18. In total, we analysed data from 40 teachers: 20 from the state sector and 20 from the private sector. The data were collected through 12 interviews and 8 focus groups conducted over a 7-month period spanning 2017 to 2018. All the private school teachers included in this study worked in schools affiliated with the ISC, whose schools account for around 80% of privately educated pupils in England. Both private boarding and day school teachers are incorporated into the study. School type (e.g. day/boarding) and phase (primary/secondary, or equivalents) are indicated in the findings and discussion.
Participants
Interviewees for this study were drawn from the previous stage of data collection, an online questionnaire, the results of which are not published here. Individuals were approached from a variety of school types, phases, regions and job roles. Those willing to participate were interviewed either on the phone or in person. Focus groups (which were all conducted in person) were recruited through a combination of methods including the study's questionnaire, advertising on social media, trade union promotion and advertising with some of the major organisations affiliated with the ISC. Focus groups contained a maximum of six teachers either from the state or from the private sector.
The convenience method of recruitment meant that some participants were clustered in the same schools. The 20 teachers from the state sector who were included in analysis worked in 18 different schools. The 20 teachers from the private sector were drawn from six different schools (three day schools and three schools with boarding facilities). There were fewer schools represented in the private school sample because approaching teachers through gatekeeper headteachers proved the most effective recruitment method, although this led to clustering. For private school focus groups, this clustering meant that participants were in a discussion group with colleagues and that all but one of the private school focus groups were conducted in a private room on school grounds. The other private school focus group and all state school teacher focus groups were held in neutral spaces such as private rooms in public libraries or community halls, and they contained a mix of teachers from different schools. Private school participants who were interviewed on school grounds may have been less willing to discuss the less positive aspects of their work while in their work environment. To manage this possibility, participants were invited to discuss teaching in general, rather than focusing on their experiences in their specific schools. Participants were also reassured that we would treat their contributions confidentially, and they agreed that, to respect each other's confidentiality, they would not discuss the content of the focus group with non-participants after the conversation had ended. Despite these efforts to encourage an open discussion, the clustering presents a limitation to the study and, while it was an ongoing consideration during analysis, this limitation should be recalled when interpreting results. Table 1 shows the breakdown of participants by sector and the age range that they taught. Age phases are defined differently between the state and private sectors in England and, because the nomenclature varies between sectors, for ease of comparison, findings are reported by just two age phases: 5 to 11 years old, termed 'primary', and 11 to 18 years old, referred to as 'secondary'. Most of the participants taught pupils in the 11 to 18 age range.
Data collection methods
Qualitative data were apt to address our research question: 'How do the sources of stress compare for state and private school teachers in England?'. Interviews and focus groups were an appropriate data collection method because they afforded a rich insight into teachers' experiences of stress and allowed opportunity for teachers to develop their narratives. In addition, focus groups enabled teachers to compare their experiences and to identify amongst themselves which experiences and stressors were common to their sector. The semi-structured interviews and focus groups asked teachers questions concerning the best and worst aspects of their work, the areas of their work that they found stressful and questions concerning workloads and experiences of internal and external school monitoring. Example questions included: 'Can you tell me about the best/worst parts of your job?'; 'What, if anything, contributes to stress at work?' and 'Can you tell me about your
Data analysis
We organised our data using MAXQDA, a software package for qualitative data analysis. In the first instance, we explored our data for descriptions of 'stress' or 'stressors' in accordance with Kyriacou's (2001) definition of stress as the experience of 'unpleasant' or 'negative' emotions. Although the categories were adaptive, we had designed a preliminary coding framework based on previous research (Appendix 1). Prior research indicated that teachers might experience stress from workload, accountability/monitoring, pupil behaviour, parents, poor collegial relationships and time pressures, and these factors became our initial categories for coding. After conducting this initial coding, we were able to refine the codes and sub-codes. Next, as recommended by a 'framework analysis' approach, we mapped overlap and interactions between the categories (Srivastava & Thomson, 2009). Following this review, we explored the data according to different variables (e.g. state/private; day/boarding and primary/secondary) in order to identify whether there were any themes that were more dominant in one kind of school compared to another. After this, we began to interpret the findings in relation to the job demands-resources theory to better understand how and why, in some contexts, activities (such as marking) become stressors.
Findings and discussion
We found that state and private school participants identified different areas as key contributors towards stress. Private school participants emphasised parents as a source of stressful accountability, whereas the state school teachers typically emphasised burdensome workloads compounded by accountability-motivated school policies as their primary stressor. In addition, we found differences in the nature of the teachers' narratives. Some teachers in the state sector described their work as intensely stressful, whereas the private school teachers typically described milder experiences of stress. The private school participants also indicated that they felt well-resourced to meet the demands of their work by long holidays and appropriate levels of autonomy. When considering these overall findings, as previously outlined, it should be remembered that the private school sample was drawn from six schools -the operations of which do not necessarily typify other kinds of private schools, for example schools that are not affiliated with ISC.
Unmanageable workloads
For state school teachers working in both primary and secondary phase education, workload was identified as a main stressor, a finding that has been mirrored in previous research (Ofsted, 2019; Perryman & Calvert, 2020). Teachers spoke of their workloads as 'absolutely overwhelming', 'daunting', 'relentless' and others used the image of 'drowning' in work. Feelings of stress arose from both the volume of work and the nature of work. Some spoke of the number of books that they needed to mark, and this was particularly stressful for teachers when they felt that the marking did not help pupils. One primary school teacher, for example, stated that she forged her pupils' handwriting in exercise books to evidence that they had read and acted on feedback. She performed what Ball (2003) refers to as an act of 'fabrication' whereby she feigned evidence of pupil achievement in order to appease her school managers who routinely scrutinised her marking.
There were exceptions in the state school sample. Lesley, a state school secondary teacher, said that she did not 'have an issue with workload' which she credited to effective school policies that had enabled the 'removal of unnecessary marking'. She believed that her experience was not 'normal'. In a separate interview, Emma, a secondary school middle leader in a different school, relayed a similar experience to that of Lesley. Emma explained that she could 'manage [her] workload very well' because there were effective policies in place to eradicate burdensome workloads especially from marking. Lesley and Emma's experiences of manageable workloads were not typical of the state school sample; they may have worked in school contexts where managers were early adopters of the workload reduction strategies encouraged by the Department for Education.
Private school teachers also spoke of long working hours. Those in the surveyed day schools discussed 'intense' days, and those in the boarding schools, or who had prior experience in boarding schools, characterised such working days as protracted. Boarding school teachers might supervise pupils into the evening or throughout the night, and teachers in both day and boarding schools explained that they were expected to participate in weekend work. Despite long and/or intense hours, many spoke of finding work in a 'co-curricular' capacity 'rewarding', 'satisfying' and 'enjoyable'. Significantly, many private sector participants spoke of holidays acting as 'compensation' for their long working weeks, whereas only one state school teacher explicitly identified holidays as a resource that equipped her to deal with the demands of her intense term-time work.
Despite long holidays that might mitigate the onset of high levels of stress, there were teachers from two different private schools who suggested that workload was stressful. Robert, a private secondary school classroom teacher, believed that there was a 'sink or swim' policy in his school and 'if you do sink, [management] are quite happy to replace you'. Katie and Madeline, both teachers in the same private day primary school, were participants in the same focus group as Robert. They also indicated that 'wellbeing' was not a 'priority' in their workplace and that staff were over-timetabled as the school was in financial difficulties. It is crucial to note that these more critical voices from the private sector emerged in a private school focus group which was held in a community centre with teachers from different schools. It may, therefore, have been the case that teachers in the other private school focus groups presented mainly positive accounts of school management while on school grounds, and with a larger group of colleagues.
Policies not pupils
Contrary to other studies, pupil behaviour did not emerge as a prevalent stressor in the state school or private school data. There were some instances of teachers from the state sector reporting extreme incidents of pupil misconduct (one had been physically assaulted by a pupil), but such reports were exceptional. At the end of interviews and focus groups in which pupil behaviour had been absent from discussion, the interviewer raised the matter and remarked that pupil behaviour had not been mentioned as a stressor. Navinder, a state secondary school teacher, explained that 'it's not the children' but 'these never-ending tasks' that originate from 'leadership and management within schools' that make teachers' work stressful. As has been reported in other research, some participants in this study explained that support from school leaders and colleagues, or the benefit of teaching experience, helped them to effectively manage stress and enjoy their work with pupils (Gordon, 2020; Richards et al., 2018).
Private school teachers spoke occasionally of concerns about 'entitled' pupils being 'rude'. However, for the most part, behaviour was characterised as good, and several compared it favourably to the state sector. Katie (primary day school teacher) stated that her 'biggest problem' was whether or not 'pupils cross their legs' whereas she imagined that in the state sector 'most teachers get chairs thrown at them'. Although this projection was not reflected in the narratives of the participating state school teachers, it provides an illustration of some private school practitioners' perceptions of the relative stressors of each sector.
Pushy parents
Private school teachers identified parents, as opposed to pupils, as 'more of an issue at [private] schools'. Interviewees from different schools considered that parents applied 'pressure' and were 'very hands on'. Teachers explained that parents expected high academic outcomes (sometimes regardless of the pupil's interest in academic work), demanded instant replies to emails and meetings at short notice, and were inclined to ask for 'investigations' into teachers' conduct if their child was aggrieved by any kind of school sanction. Teachers interpreted these 'unpleasant' interactions as a consequence of the 'business client' nature of the relationship, which positioned teachers as 'more answerable to the parents' compared to state school teachers, who were more accountable to government. Several interviewees commented that parents were paying 'large sums of money' and thus held high expectations for what one teacher termed the end 'product'.
In contrast to this experience, participants from the state sector rarely noted parents as a direct source of stress. There was one cursory mention of a time-consuming parent who wanted daily reports on their child's progress, but beyond this the theme of difficult parents was largely absent from the data. It may have been missing because participants were adequately resourced by school managers to cope with any potentially difficult relationships with parents. By way of example, one secondary classroom teacher in the state sector, Erin, noted that it was a strong point of her school that she was 'well supported' by management. Several others commented that they had good relationships with parents who were supportive of the school. For other teachers, the stresses of parental demands and pupil misbehaviour may have been relatively unimportant compared to the 'overwhelming' workloads and intense scrutiny that they reported.
Work scrutiny
State school participants reflected on the sources of their stressful workloads. In some cases, they perceived that school leaders enacted burdensome policies that led to excessive workloads. Erin (secondary school classroom teacher) commented that she engaged with 'tedious' marking because she felt she needed to 'mark the books as if next week someone's going to look at them and scrutinise them and say, "oh, you didn't mark this"'. When discussing why school leaders might implement policies that led to burdensome workloads, participants attributed these accountability cultures to the wider demands of the schools' inspectorate (Ofsted). Rosalyn, a primary teacher, suggested that school leaders scrutinised classroom staff through fear of not being adequately prepared for an external inspection: '[School leaders] are just frightened. I almost feel sorry for them [...] There is that fear, but it's that genuine fear of Ofsted because they can... and at times they are removed from their jobs.'
The idea of high-stakes accountability being 'filtered down' to classroom teachers from school leaders (fearful of the consequences of a poor Ofsted report) was a prevalent theme in the data and has been reported elsewhere (Perryman, 2009). The finding mirrors the Department for Education's teacher recruitment and retention strategy that attributes burdensome workloads in part to school accountability cultures prompted by a desire to prepare teachers for high-stakes Ofsted inspections (Department for Education, 2019b).
There were different views on the ways in which accountability manifests for private school teachers. Some emphasised that parents were the primary agents of accountability and intimated that they were free from unnecessary scrutiny from school managers. Others reported that school monitoring policies (e.g. book marking inspections and unannounced classroom inspections) were sources of stress. Rupert, a private secondary day school classroom teacher, stated that he had 'an acceptable, manageable level of work' and he linked this to his autonomy. He enjoyed being 'a master of [his] own domain' and the 'autonomy of the role' that was 'free from outside interference'. He clarified that he felt free from unnecessary intra-school scrutiny: 'no one's saying, "You must teach this in this lesson, in this way, have you done this yet? Where's the evidence of this?"'. He continued to suggest that managers were not 'coming into the lesson and asking, "What are [the teachers] doing? Why are they doing it?"'. This feeling of autonomy appeared to resource Rupert to meet the demands of his work and to experience it not only as manageable but also as enjoyable.
While Rupert's view was shared by some participants from other schools, Katie and Madeline, who worked in the same private primary school, felt that their school managers implemented unnecessary levels of scrutiny. Katie described her school's lesson monitoring policy as 'overkill' and 'annoying', and Madeline detailed a stressful experience of observation whereby the observer 'burst into [her] room' and provided feedback that she did not find 'particularly helpful'. She perceived that the inspection and feedback were offered only for the purpose of 'ticking boxes' for an imagined future inspection from the Independent Schools Inspectorate (a body overseen by Ofsted that is licensed to inspect some private schools in England). When asked why they thought their school might have implemented such monitoring practices (specifically unannounced inspections), Katie posited that her school's managers were 'very conscious of what the state sector [is] doing, and they pick up on these buzzwords', believing that adopting the latest trends would be 'mega for the business'. Robert, a secondary school teacher from a different private school and a participant in the same focus group conversation, suggested that ineffective monitoring policies were 'like some kind of virus' spreading across the sector divide. These comments, when compared to Rupert's experiences, indicate that conditions in the private sector may vary considerably from teacher to teacher and school to school, and some schools may be closer in their policy approach to the state sector than others. In Katie and Robert's views, this perceived mirroring of state school procedures was not desirable and led to stressful work experiences.
Stress intensity
Although there was clear variation in the narratives of the private school teachers, it was noticeable that their descriptions of the pressures that they felt were less intense compared to those of the state school participants. State sector narratives included descriptions of teaching as a highly stressful profession that left teachers struggling with long-lasting negative psychological and emotional effects. Many of these descriptions of profoundly stressful work related to the volume and nature of work experienced by teaching staff. When asked about workload, Jenny, a state primary school classroom teacher, stated that 'most days it feels overwhelming, to be honest, but that's the way it is I think'. Jenny's comment reflected a resignation that 'overwhelming' workload was a condition of her profession. Faced with managing the extreme demands of her work, it seemed that she could not conceive of any possible resources that could mitigate the effects. Similarly, Navinder (state secondary school classroom teacher) commented that 'the workload is ever increasing and there just seems to be no end in sight'. His comment similarly revealed that he was ill-resourced to manage the flow of endless and 'increasing' work.
Another secondary state school respondent, Alison, detailed how she had looked to leave the profession to assume a position in a supermarket because she found her long working hours to be 'soul destroying'. She explained that her heavy timetable had led her to lose all her 'creativity and enthusiasm', a comment that resounded with another primary teacher (Rosalyn) who spoke of teachers 'broken' by their workloads. In Rosalyn's case, she had opted to assume part-time hours to balance the demands of her job with other aspects of her life.
Although private school teacher Robert spoke of his school's managers having a 'sink or swim' attitude to staff's ability to manage stress, many others from the sector spoke of workloads that were 'acceptable', 'manageable' or intense but mitigated by long holidays. Others, particularly those in boarding schools, pointed to the physical environment as a resource that helped them manage stressful workdays. They perceived themselves as lucky to enjoy 'beautiful' views and surroundings.
The finding that state school teachers experienced their work as more intensely stressful compared to the sampled private school teachers becomes significant when the link between teacher stress and pupil outcomes is considered. The benefits of private education for pupils are well-documented. Compared to state school pupils, the privately educated can expect a pay premium throughout their working lives and are much more likely to attend elite universities and enter high-status professions (Green et al., 2018; Macmillan et al., 2015). If state school pupils are taught by teachers with high levels of stress and poor wellbeing, and these negative teacher states are linked to pupil outcomes (McLean & Connor, 2015; Ramberg et al., 2020), this could further compound the disadvantage associated with being state-educated.
Conclusion
From the research we concluded that the teachers in our sample experienced similar sources of stress, although most private school participants articulated the experience of this stress as less intense compared to those working in state schools. This finding supports other research which has tentatively indicated that private sector teachers are more satisfied with their work compared to those in the state sector (Micklewright et al., 2014). We found that although state school teachers enjoyed their work with the pupils they served, they were fatigued, exhausted and 'broken' by burdensome workloads. Other work has found that such conditions contribute to employees' decisions to leave state school teaching, and for those that remain in service, poor wellbeing and high stress are negatively correlated with pupil outcomes (CooperGibson, 2018; McLean & Connor, 2015). With these considerations in mind, it is important to address the sources of excessive stress for state school practitioners, and to consider how best to resource these much-needed staff.
Some of the resources available to private school teachers, and particularly boarding school teachers, such as attractive physical environments and longer school holidays, were specific to the sector and could not be easily introduced in the state sector. However, there was some evidence to suggest that teachers in both sectors might be resourced to manage stress through appropriate levels of autonomy. Indeed, those who depicted themselves as adequately autonomous and free from unhelpful intra-school scrutiny spoke of high work-enthusiasm and engagement. Other research indicates that school leaders can support autonomy through the cultivation of supportive communities that encourage and enable sharing of practice, reflective dialogue, collective responsibility and the cultivation of common values (Stoll et al., 2006; Valckx et al., 2020). Strong supervisory support and bespoke mentoring can further help teachers to manage the demands of their work (Bakker et al., 2007; Gordon, 2020).
Participating state school teachers indicated that stress from heavy workloads resulting from accountability-motivated school policies overwhelmed other potential sources of stress such as relationships with parents, colleagues or pupil behaviour. This finding accords with recent research into teacher retention in England that emphasises workload as a core problem (CooperGibson, 2018; Perryman & Calvert, 2020). We consider that school leaders in the state sector might usefully continue to address teacher workload and review teacher monitoring policies and practices in relation to staff stress. There was some limited evidence in our study of state school teachers who worked in contexts where workload management strategies had been effectively implemented, and these teachers reported lower workload stress than their peers. Existing case studies and guidance on effective workload-reduction strategies (such as workload reduction toolkits) could provide a practical and sector-appropriate source of help for school leaders interested in improving workload quantity and quality in their schools (Department for Education, 2018b; Teacher Workload Review Group, 2016). Allen and Sims's (2018) work makes a further series of practical recommendations for addressing burdensome workloads. One such recommendation is that schools restrict working hours each day in order to identify which tasks teachers prioritise as essential to the functioning of the school, and which tasks (and associated policies) are non-essential and thus can be discarded.
In the private sector, our small sample indicated that some contexts might benefit from a similar workload and accountability policy review, although a focus on supporting staff's capacity to navigate difficult communications with parents might prove more pertinent. While this study is limited in its capacity to identify practical strategies for teachers and school leaders to manage stressful encounters or complaints from fee-paying parents, this is an area that could benefit from further sector-specific investigation.
As a concluding comment, we found evidence of greater variation within the private sector than we were able to report within the parameters of this paper. There remain several future avenues for exploration including comparative studies of the different sources of stress for private day school teachers compared to those in boarding schools, for example. In addition to this, there is potential for a more granular understanding of the way in which teacher stress affects pupils' educational experiences across and within sectors. Further qualitative study of the association between teacher stress and pupils' school experience might yield valuable findings for school and sector leaders looking to better understand interactions within the school ecosystem.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Jude Brady was funded by a Pigott Studentship for the duration of this study.
Lentils and Yeast Fibers: A New Strategy to Mitigate Enterotoxigenic Escherichia coli (ETEC) Strain H10407 Virulence?
Dietary fibers exhibit well-known beneficial effects on human health, but their anti-infectious properties against enteric pathogens have been poorly investigated. Enterotoxigenic Escherichia coli (ETEC) is a major food-borne pathogen that causes acute traveler's diarrhea. Its virulence traits mainly rely on adhesion to an epithelial surface, mucus degradation, and the secretion of two enterotoxins associated with intestinal inflammation. With the increasing burden of antibiotic resistance worldwide, there is an urgent need to develop novel alternative strategies to control ETEC infections. This study aimed to investigate, using complementary in vitro approaches, the inhibitory potential of two dietary-fiber-containing products (a lentil extract and yeast cell walls) against the human ETEC reference strain H10407. We showed that the lentil extract decreased toxin production in a dose-dependent manner, reduced pro-inflammatory interleukin-8 production, and modulated mucus-related gene induction in ETEC-infected mucus-secreting intestinal cells. We also report that the yeast product reduced ETEC adhesion to mucin and Caco-2/HT29-MTX cells. Both fiber-containing products strengthened intestinal barrier function and modulated toxin-related gene expression. In a complex human gut microbial background, neither product elicited a significant effect on ETEC colonization. These pioneering data demonstrate the promising role of dietary fibers in controlling different stages of the ETEC infection process.
Introduction
The food- and water-borne enterotoxigenic Escherichia coli (ETEC) is the primary agent responsible for travelers' diarrhea, with hundreds of millions of diarrheal episodes worldwide [1]. The site of action for ETEC is mostly localized in the distal part of the human small intestine [2][3][4]. There, a myriad of virulence factors support its infectious cycle [5,6]. The mucus-degrading proteins (YghJ and EatA) and adhesins (such as FimH and Tia) facilitate ETEC's access to the epithelial brush border and promote ETEC attachment, respectively [7,8]. Then, ETEC's close proximity to the intestinal epithelium favors the action of the heat-labile (LT) and/or heat-stable (ST) toxins. These enterotoxins trigger water and electrolyte secretion into the intestinal lumen, resulting in watery diarrhea.
Growth Kinetics Assays in Broth Media
ETEC strain H10407 (initial concentration of 10^7 CFU·mL^−1) was allowed to grow aerobically (37 °C, 100 rpm, 5 h) in complete LB or M9 minimal media (Sigma, St. Louis, MO, USA) with or without each fiber-containing product (2 g·L^−1). The media were regularly sampled and plated onto LB agar for ETEC numeration (n = 3).
LT Toxin Measurement in Broth Media
The LT concentration was assayed by cultivating ETEC strain H10407 in Casamino acids-yeast extract (CAYE) medium (37 °C, 100 rpm) with or without fiber-containing products at various concentrations (ranging from 0.0625 to 8 g·L^−1). After overnight culture, the medium was centrifuged (3000× g, 5 min, 4 °C), and toxin concentrations were measured in the supernatant by GM1-ELISA assay, as previously described [19]. Pure LT toxin detection inhibition assays were also carried out as aforementioned with the pure LT-Cholera B toxin sub-unit (Sigma-Aldrich, Saint-Louis, MO, USA) added to CAYE medium at a concentration of 500 ng·mL^−1 without ETEC bacteria. The absence of a negative effect of the various doses of fiber-containing products on ETEC growth was verified by plating on LB agar plates at the end of the LT experiments. Three independent biological replicates were performed for each assay.
Mucin Bead Adhesion Assays
Mucin-alginate beads were obtained as already described [41]. Briefly, the mixture containing 5% (w/v) porcine gastric mucin type III and 2% (w/v) sodium alginate (Sigma-Aldrich, Saint-Louis, MO, USA) was dropped using a peristaltic pump into a sterile solution of 0.2 M CaCl2 under agitation (100 rpm). The beads were stored at 4 °C for no more than 24 h prior to experiments. For yeast-alginate beads, mucin was replaced by the specific yeast cell wall product at the same concentration (5% w/v). Adhesion assays on beads were carried out as follows: ETEC was inoculated at a dose of 10^7 CFU·mL^−1 and allowed to adhere during a 1 h contact period. At the end of the experiment, beads were washed three times with ice-cold sterile physiological water and crushed with an Ultra-Turrax apparatus (IKA, Staufen, Germany). The resulting suspensions were then serially diluted and plated onto LB agar plates for ETEC numeration ('adhered' cells). In order to test adhesion inhibition by mannose residues, D-mannose (Sigma-Aldrich, Saint-Louis, MO, USA) was added at a final concentration of 10 g·L^−1 to the medium prior to ETEC inoculation. Three independent biological replicates were performed.
Caco-2 and HT29-MTX Cell Culture Assays
Caco-2 and HT29-MTX cells were cultivated as already reported [19]. The Caco-2/HT29-MTX co-culture (ratio 70:30) was maintained for 18 days to reach the full differentiation stage [42]. Cells were pre-treated or not with fiber-containing products (2 g·L^−1) for a 3 h period. The cells were then infected with ETEC strain H10407 at a multiplicity of infection (MOI) of 100 for 3 additional hours (37 °C, 5% CO2) in antibiotic/antimycotic-free medium. At the end of the experiment, to monitor 'planktonic' ETEC bacteria, the culture medium was collected and centrifuged (3000× g, 5 min, 4 °C). The resulting pellet was kept in RNAlater (Invitrogen, Waltham, MA, USA) at −20 °C for downstream RNA extraction and RT-qPCR analysis of ETEC virulence genes. To monitor 'adhered' ETEC bacteria, cell layers were washed three times with ice-cold PBS (ThermoFisher, Waltham, MA, USA). In a first set of experiments, Caco-2/HT29-MTX cells were lysed with 1% Triton X-100 (Sigma-Aldrich, Saint-Louis, MO, USA). Cell lysates were plated onto LB agar to determine the number of ETEC bacteria adhered to the cells, or further centrifuged (3000× g, 5 min, 4 °C). The resulting supernatant was used to measure intracellular pro-inflammatory IL-8 levels, while the cell pellets were stored in RNAlater (Invitrogen, Waltham, MA, USA) at −20 °C for further prokaryotic RNA extraction and analysis of virulence gene expression in adhered bacteria. In a second set of experiments, RNAs were extracted from the washed cell layers for eukaryotic (host) gene expression analysis. Control experiments were also performed with non-infected Caco-2/HT29-MTX cells and in DMEM medium devoid of intestinal cells for virulence gene expression analysis. The impact of both ETEC strain H10407 and the fiber-containing products on intestinal cell viability was controlled over the 3 h time course using a Trypan blue exclusion assay. For each set of experiments, at least three independent biological replicates were performed.
Measurement of Caco-2/HT29-MTX Permeability on Transwells
For permeability experiments, Caco-2/HT29-MTX cells were rinsed with PBS and incubated with an apical concentration of caffeine (1 g·L^−1) or atenolol (50 mg·L^−1) in fresh DMEM medium with or without dietary-fiber-containing products (2 g·L^−1). The medium was collected after 2 h of incubation at both the apical and basolateral sides of the transwells. The caffeine and atenolol concentrations were determined by HPLC (Elite LaChrom, Merck HITACHI, USA) using an Onyx™ Monolithic C18 LC column of 100 × 4.6 mm at 20 °C (Phenomenex, Torrance, CA, USA) and an Interchim C18 column of 250 × 4.6 mm at 40 °C (Interchim, Montluçon, France), respectively. The mobile phase was composed of acetonitrile/pH 6.5 PBS (10:90, v/v) and acetonitrile/water (20:80, v/v) with 10 mM ammonium acetate for caffeine and atenolol, respectively. Data were obtained and analyzed with the EZChrom Elite software at 235 and 275 nm for caffeine and atenolol, respectively. The caffeine and atenolol concentrations were calculated from standard curves established from known serial dilutions of each compound. The molecular absorption was defined as the percentage of basolateral molecules out of the total molecules introduced. The transepithelial electrical resistance (TEER) was measured regularly during the time course of the experiment (total duration = 3 h) with a volt/ohmmeter (World Precision Instruments, Hessen, Germany). Three independent biological replicates were performed.
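For clarity, the absorption metric reduces to a simple mass-balance ratio. A minimal R sketch is given below; the concentration and volume values are hypothetical placeholders, not measurements from this study:

# Hypothetical transwell readings for atenolol (concentrations in mg·L^-1)
apical_c0  <- 50.0   # concentration introduced apically at t = 0
basal_c2h  <- 1.6    # concentration measured basolaterally after 2 h
vol_apical <- 0.5    # apical compartment volume in mL (assumed)
vol_basal  <- 1.5    # basolateral compartment volume in mL (assumed)

# Absorption (%) = basolateral amount / total amount introduced * 100
absorption <- (basal_c2h * vol_basal) / (apical_c0 * vol_apical) * 100
absorption           # ~9.6% in this toy example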
RNA Extractions
Eukaryotic RNAs were extracted with the RNeasy Plus Mini Kit (Qiagen, Hilden, Germany). Total bacterial RNAs were extracted using the TRIzol® method (Invitrogen, Waltham, MA, USA), as already described [43], with an additional purification step with a MinElute Cleanup Kit (Qiagen, Hilden, Germany). The nucleic acid purity was checked and RNA was quantified using a NanoDrop ND-1000 (Thermo Fisher Scientific, Waltham, MA, USA). To remove any contamination by genomic DNA, a DNase treatment was performed [43].
Quantitative Reverse Transcription (RT-qPCR) Analysis of ETEC Virulence Genes

cDNA amplification was achieved using a CFX96 apparatus (Bio-Rad, Hercules, CA, USA), and qPCR was performed using the primers listed in Table 2. qPCR data were analyzed using the comparative E^−ΔΔCt method and were normalized with the reference genes tufA and ihfB. The amplification efficiency of each primer pair was controlled from the slope of the standard curves (E = 10^(−1/slope) − 1), based on a serial dilution of a pool of three ETEC cDNA samples. Differences in the relative expression levels of each virulence gene were calculated as follows: ΔΔCt = (Ct target gene − Ct reference gene) in the tested condition − (Ct target gene − Ct reference gene) in the reference condition, and data were derived from E^−ΔΔCt.

RT-qPCR Analysis of Host Gene Expression

The expression of host genes related to mucin synthesis, tight junction proteins, and inflammation was investigated (primers listed in Table 2). The data were analyzed with SDS 2.3 software (Thermo Fisher Scientific, Waltham, MA, USA) using the comparative 2^−ΔΔCt method and were normalized with the reference genes GAPDH, HPRT, and PPIA. The amplification efficiency of each primer pair was controlled from the slope of the standard curves (E = 10^(−1/slope) − 1) based on a serial dilution of a pool of six RNA samples from the experiments.
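To make the arithmetic concrete, here is a minimal R sketch of the comparative E^−ΔΔCt calculation; the slope, Ct values, and single reference gene are hypothetical placeholders (the actual analyses used the reference gene pairs listed above):

# Amplification factor from a serial-dilution standard curve (hypothetical slope)
slope <- -3.32
E <- 10^(-1 / slope)    # per-cycle amplification factor (~2.0 here);
                        # the efficiency quoted in the text corresponds to E - 1

# Hypothetical Ct values for one target gene and one reference gene
ct_target <- c(tested = 24.1, reference = 22.8)
ct_ref    <- c(tested = 16.2, reference = 16.0)

dct  <- ct_target - ct_ref               # delta Ct in each condition
ddct <- dct["tested"] - dct["reference"] # delta-delta Ct
fold_change <- E^(-ddct)                 # relative expression of the target
unname(fold_change)                      # values < 1 indicate downregulation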
Measurement of Interleukin-8 by ELISA
Pro-inflammatory IL-8 cytokine concentrations were determined in cell lysates from the Caco-2/HT29-MTX co-culture experiments according to the manufacturer's instructions (DuoSet ELISA, human CXCL8/IL-8, R&D Systems, Minneapolis, MN, USA). The results were expressed as fold changes compared to control experiments performed without ETEC (non-infected) or without fiber-containing products (non-treated).
Batch Experiments
Batch experiments were carried out for 24 h in 60 mL penicillin bottles containing 20 mL of nutrient medium and 60 mucin-alginate beads. Each liter of medium was composed of 1 g of potato starch, 1 g of yeast extract, 1 g of proteose peptone, and 1 g of type III pig gastric mucin (all from Sigma-Aldrich, St. Louis, MO, USA) suspended in 0.1 M phosphate buffer (pH 6.8) and autoclaved before use. The lentil extract and yeast cell wall products were added at a final fiber concentration of 2 g·L^−1. In the control condition with no dietary-fiber-containing product (non-treated), the composition of the nutritive medium was compensated by the addition of 0.5 g of guar gum, 1 g of pectin, and 0.5 g of xylan (same total fiber concentration).
To examine the inter-individual variability of ETEC interactions with dietary-fiber-containing products and human gut microbiota, the experiments were replicated with fecal samples from six healthy individuals. These donors were three males (donors 1, 2, and 3) and three females (donors 4, 5, and 6), ranging in age from 20 to 30 years, without a history of antibiotic use in the six months prior to the study. Consent for fecal collection was obtained under registration number BE670201836318 (Gent University). The fecal collection and fecal slurry preparation were performed as previously described [61]. An inoculation at a 1:5 dilution of the 20% (w/v) fecal slurry resulted in a final concentration of 4% (w/v) fecal inoculum in the penicillin bottles. To reproduce the stresses that the pathogen endures during transit through the human stomach and small intestine, ETEC strain H10407 was pre-digested using a simple static gastrointestinal procedure (Table 3), as already described [43]. ETEC was inoculated at a final concentration of 10^8 CFU·mL^−1. The penicillin bottles were flushed with N2/CO2 (80%/20%) during 20 cycles to obtain anaerobic conditions. The cycle was stopped at overpressure, and before the start of the experiment, the bottles were set at atmospheric pressure. The penicillin bottles were incubated (37 °C, 120 rpm) on a KS 4000i orbital shaker (IKA, Staufen, Germany), and aliquots were taken immediately after the start of the incubation (T0) and at 24 h of fermentation (T24h) from the liquid and atmospheric phases. Mucin-alginate beads were collected 24 h post-inoculation and were washed twice in ice-cold physiological buffer before storage. All aliquots were immediately stored at −20 °C, except samples for flow cytometry, which were fixed before storage.

Table 3. Static in vitro gastro-ileal digestion procedure. A static batch incubation (Erlenmeyer) was used to reproduce the physicochemical parameters of gastro-ileal digestion. Digestive secretions and solutions for pH adjustment were manually added during the 90 min digestion.
Parameters of Static In Vitro Digestion | Gastric Vessel | Duodenum-Ileum Vessels
pH | from 6 (T0) to 2.1 | maintained at 6.8
Volume (mL) | 50 |
Gut Microbiota Metabolite Analysis
Short-chain fatty acid (SCFA) production was measured using capillary gas chromatography coupled to a flame ionization detector after diethyl ether extraction, as previously described [61,62]. The gas phase composition was analyzed with a Compact gas chromatograph (Global Analyser Solutions, Breda, The Netherlands) equipped with a Molsieve 5A pre-column and Porabond column (CH4, O2, H2, and N2) or an Rt-Q-bond pre-column and column (CO2). The concentrations of gases were determined with a thermal conductivity detector. The total pressure in the penicillin bottles was analyzed using a tensiometer (Greisinger, Regenstauf, Germany).
DNA Extraction
DNA extraction and quality controls were performed on samples collected at T0 and T24h during the batch experiments, as previously described [61,63]. The DNA quality and quantity were verified by electrophoresis on a 1.5% (w/v) agarose gel and by analysis on a DeNovix DS-11 spectrophotometer (DeNovix, Wilmington, DE, USA).
ETEC Quantification by qPCR
qPCR was performed using a StepOnePlus real-time PCR system (Applied Biosystems, Waltham, MA, USA). The reactions were conducted in a total volume of 20 µL, consisting of 10 µL of 2× iTaq universal SYBR Green supermix (Bio-Rad Laboratories, Hercules, CA, USA), 2 µL of DNA template, 0.8 µL (10 µM stock) of each primer, and 6.4 µL of nuclease-free water. The primers used for ETEC quantification are listed in Table 2. The data were analyzed using the comparative E^−ΔΔCt method. The amplification efficiency of the primer pairs was determined by the generation of a standard curve based on the serial dilution of five ETEC-infected samples. Differences in the number of copies of the eltB gene were calculated as follows: ΔΔCt = (Ct target gene − Ct reference gene) in the sample of interest − (Ct target gene − Ct reference gene) in the reference sample, and data were derived from E^−ΔΔCt. All qPCR analyses were conducted in triplicate.
ETEC Quantification by RNA Fluorescent In Situ Hybridization
Flow cytometry samples were fixed and prepared for RNA fluorescent in situ hybridization, as already described [64]. Cells were hybridized in 100 µL of hybridization buffer for 3 h at 46 °C. The hybridization buffer consisted of 900 mmol·L^−1 NaCl, 20 mmol·L^−1 Tris-HCl (pH 7.2), 0.01% sodium dodecyl sulfate, 20% deionized formamide, and 5 mM EDTA. The buffer also contained the two E. coli-targeting probes at a final concentration of 2 ng·µL^−1 and a combination of probes targeting eubacteria at a final concentration of 1 ng·µL^−1 each (Table 2). After hybridization, the samples were washed with wash buffer (900 mmol·L^−1 NaCl, 20 mmol·L^−1 Tris-HCl pH 7.2, 0.01% sodium dodecyl sulfate) for 15 min at 48 °C. After washing, the cells were resuspended in 50 µL of PBS. The samples were diluted and stained with SYBR® Green I (100× concentrate in 0.22 µm filtered dimethyl sulfoxide, Invitrogen) and incubated for 20 min at 37 °C. The samples were analyzed immediately after incubation with an Attune NxT BRXX flow cytometer (Thermo Fisher Scientific, Waltham, MA, USA). The flow cytometer was operated with Attune™ Focusing Fluid as the sheath fluid. The threshold was set on the primary emission channel of the blue laser (488 nm). The Attune Cytometric Software was used to draw the gates, and the percentage of active E. coli in the total bacterial population was expressed as the number of cells showing the E. coli probe fluorescence out of the number of cells fluorescently labeled with the eubacteria probes and SYBR Green fluorescence.
16S Metabarcoding Analysis of Gut Microbial Communities
Next-generation 16S rRNA gene amplicon sequencing of the V3-V4 region was performed by LGC Genomics (Berlin, Germany) on an Illumina MiSeq platform (Illumina, San Diego, CA, USA), as previously described [61], except that the luminal and mucosal samples had undergone 30 and 33 amplification cycles, respectively.
All data analysis was performed in R (4.1.2). The DADA2 R package was used to process the amplicon sequence data according to the pipeline tutorial [65]. In a first quality-control step, the primer sequences were removed, and reads were truncated at a quality score cut-off (truncQ = 2). Besides trimming, additional filtering was performed to eliminate reads containing any ambiguous base calls or reads with a high number of expected errors (maxEE = 2.2). After dereplication, the unique reads were further denoised using the DADA error estimation algorithm and the selfConsist sample inference algorithm (with option pooling = TRUE). The obtained error rates were further inspected and, after approval, the denoised reads were merged. Subsequently, the ASV table obtained after chimera removal was used for taxonomy assignment with the Naive Bayesian classifier and the DADA2-formatted Silva v138 reference database. ASVs mapping back to anything other than 'Bacteria', as well as singletons, were excluded and considered to be technical noise [66].
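For orientation, the core of such a DADA2 workflow might look like the following R sketch; the file paths and the Silva training-set file name are hypothetical placeholders, while the filtering parameters follow those reported above:

library(dada2)

# Hypothetical demultiplexed, primer-trimmed paired-end reads
fnFs <- sort(list.files("reads", pattern = "_R1.fastq.gz", full.names = TRUE))
fnRs <- sort(list.files("reads", pattern = "_R2.fastq.gz", full.names = TRUE))
filtFs <- file.path("filtered", basename(fnFs))
filtRs <- file.path("filtered", basename(fnRs))

# Quality filtering: truncate at a quality cut-off, drop ambiguous bases
# and reads with too many expected errors
filterAndTrim(fnFs, filtFs, fnRs, filtRs,
              truncQ = 2, maxN = 0, maxEE = c(2, 2), multithread = TRUE)

# Error learning and pooled sample inference
errF <- learnErrors(filtFs, multithread = TRUE)
errR <- learnErrors(filtRs, multithread = TRUE)
dadaFs <- dada(filtFs, err = errF, pool = TRUE, multithread = TRUE)
dadaRs <- dada(filtRs, err = errR, pool = TRUE, multithread = TRUE)

# Merge pairs, build the ASV table, remove chimeras, assign taxonomy
mergers <- mergePairs(dadaFs, filtFs, dadaRs, filtRs)
seqtab  <- makeSequenceTable(mergers)
seqtab.nochim <- removeBimeraDenovo(seqtab, method = "consensus", multithread = TRUE)
taxa <- assignTaxonomy(seqtab.nochim, "silva_nr99_v138_train_set.fa.gz", multithread = TRUE)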
Statistical Analysis
All statistical analyses, except those conducted on the microbiota diversity and composition results, were performed using GraphPad Prism v8.0.1. The statistical analysis of microbiota diversity was performed in R, version 4.1.2 (R Core Team, 2016), using the packages phyloseq (v1.38) [67] for ASV data handling, vegan (v2.5.7) [68] and betapart (v1.5.4) [69] for ASV diversity analysis, and DESeq2 (v1.34) [70] for testing significantly higher/lower ASV abundances. The evolution of the microbial community α-diversity between conditions was followed by computing richness (observed ASVs) and evenness indexes (Shannon, Simpson, inverse Simpson, and Fisher) using vegan. To highlight the differences in microbial community composition between conditions, ordination and clustering techniques were applied and visualized with ggplot2 (v3.3.5) [71]. Non-metric multidimensional scaling (NMDS) was based on the relative-abundance-based Bray-Curtis dissimilarity matrix [72]. The influence of ETEC infection and the type of beads used was determined by applying a distance-based redundancy analysis (db-RDA) using the abundance-based Bray-Curtis distance as a response variable [71,73]. db-RDA was performed both including and excluding ASV1 (attributed to Escherichia/Shigella) from the ASV table. The significance of group separation between conditions was also assessed with a permutational multivariate analysis of variance (permANOVA) using distance matrixes [71]. Prior to this formal hypothesis testing, the assumption of similar multivariate dispersions was evaluated. In order to find statistically significant differences in ASV abundance between the infected and non-infected conditions, a Wald test (corrected for multiple testing using the Benjamini-Hochberg method) was applied using the DESeq2 package. The metabolic response (measured SCFAs and pH) was modeled as a function of the bead and infection conditions in a db-RDA analysis.
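A condensed R sketch of these diversity and ordination analyses is shown below; the objects asv (a samples-by-ASVs count table) and meta (the matching metadata with donor, infection, and product columns) are hypothetical placeholders:

library(vegan)

# Bray-Curtis dissimilarities on relative abundances
rel <- decostand(asv, method = "total")
bc  <- vegdist(rel, method = "bray")

# Alpha-diversity: richness and evenness indexes
richness <- specnumber(asv)
shannon  <- diversity(asv, index = "shannon")
invsimp  <- diversity(asv, index = "invsimpson")

# Ordination: NMDS and db-RDA constrained by the design variables
nmds      <- metaMDS(bc)
dbrda_fit <- dbrda(bc ~ infection + lentil + yeast, data = meta)

# permANOVA on the distance matrix, preceded by a dispersion check
disp <- betadisper(bc, meta$infection)
anova(disp)                       # homogeneity of multivariate dispersions
adonis2(bc ~ donor + infection, data = meta, permutations = 999)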
Fiber-Containing Products Do Not Impede ETEC Growth in Complete Culture Medium
When ETEC strain H10407 was grown in LB-rich medium (Figure 1A), no statistical difference was observed between the conditions supplemented with the lentil extract ('lentils') and the specific yeast cell walls ('yeast') compared to the negative control ('non-treated'). Therefore, neither of the two fiber-containing products was able to impede ETEC growth in a nutrient-rich culture medium. In M9 minimal medium (Figure 1B), both products were able to sustain ETEC growth compared to the non-treated condition, leading to an almost 2-log difference with the control condition after 5 h of incubation. This overgrowth became statistically significant at 240 and 300 min according to Dunnett's multiple comparisons test (p < 0.05).
The Lentil Extract Decreases LT Toxin Concentrations in a Dose-Dependent Manner
Irrespective of the dose tested, the specific yeast cell walls had no effect on LT toxin concentrations (Figure 2A). In contrast, the lentil extract significantly decreased LT toxin concentrations in a clear dose-dependent manner (Figure 2B). This inhibitory effect was significant starting at the dose of 0.0625 g·L^−1 (1.64-fold decrease, p < 0.05). LT toxin was no longer detected when the lentil concentration exceeded 1 g·L^−1. To further investigate the possible mechanism of inhibition, we incubated the pure B sub-unit of the LT toxin at 500 ng·mL^−1 with various doses of the lentil extract in the absence of ETEC (Figure 2C). The lentil extract tended to inhibit LT toxin detection by the GM1-ELISA assay in a dose-dependent manner. At the highest fiber dose tested (8.0 g·L^−1), the LT concentrations were 36-fold lower (6.0 ± 9.1 ng·mL^−1) compared to the lowest dose (0.0625 g·L^−1, 214.8 ± 158.9 ng·mL^−1, p = 0.08). Finally, we verified that the lentil extract had no effect on ETEC growth in the CAYE medium during the LT assays (Figure 2D).
Yeast Cell Walls Inhibit ETEC Adhesion to Mucin and Mucus-Secreting Intestinal Cells
First, the absence of a deleterious effect of both ETEC strain H10407 and the fiber-containing products on intestinal cell viability was confirmed (Figure S1 in Supplementary Materials). The lentil extract and yeast cell walls were able to significantly reduce ETEC adhesion to mucin-alginate beads by about 6- and 3-fold, respectively (Figure 3A, p < 0.05). The yeast cell walls also reduced the number of ETEC bacteria adhered to Caco-2/HT29-MTX cells by nearly one log compared to the non-treated condition (p < 0.001, Figure 3B). Additional experiments were performed with yeast-alginate beads to challenge ETEC's affinity for yeast cell walls (Figure 3C). ETEC adhesion on yeast-alginate beads was significantly increased compared to mucin-alginate beads (nearly a one-log increase, p < 0.01). The addition of mannose at 10 g·L^−1 to the medium did not affect ETEC adhesion on yeast-alginate beads (non-significant 33% inhibition, p > 0.05), while it had a significant impact on the number of adherent bacteria on mucin-alginate beads (64% inhibition, p < 0.01).
Both Fiber-Containing Products Modulate ETEC Toxin-Related Virulence Gene Expression
The impact of the fiber-containing products on ETEC strain H10407 virulence genes was analyzed using two different experimental set-ups: with Caco-2/HT29-MTX cells (Figure 4A) or in DMEM medium devoid of intestinal cells (Figure 4B). Overall, the lentil extract and the specific yeast cell walls both had a strong effect on the virulence gene expression of planktonic (i.e., non-adhered) ETEC bacteria, whether in the presence or absence of intestinal cells. Interestingly, the lentil extract upregulated the expression of the fimH adhesin (5.3- to 8.6-fold) and YghJ mucinase (2.3- to 10.3-fold) genes (Figure 4A,B) while downregulating the expression of the two toxin genes eltB and estP, as well as tolC, which participates in ST toxin secretion, and the rpoS gene involved in environmental stress responses. The presence of intestinal cells did not alter the modulatory effect of lentils on ETEC gene expression. The yeast cell walls increased the expression of the two adhesins, fimH and tia, as well as the genes involved in LT toxin production and secretion, eltB and leoA, from 1.32- to 4.47-fold, depending on the gene (Figure 4A,B). In the non-treated conditions, cell adhesion increased virulence gene expression, as shown by the respective 5.5-, 2.3-, and 3.0-fold increases in fimH, eltB, and estP (p < 0.05, Figure 4A). Compared to planktonic bacteria, the modulation of adhered bacteria virulence by the dietary-fiber-containing products was more subtle (Figure 4A). The two products reduced eltB and estP toxin gene induction to a maximum of 1.7-fold compared to the non-treated control (Figure 4A). In particular, yeast cell walls significantly reduced estP gene induction in adhered bacteria by 90% (p < 0.05). In contrast, neither of the fiber products succeeded in reducing the 5-fold fimH induction by cell adhesion (Figure 4A), with a slight promoting effect for yeast cell walls (1.28-fold increase, p < 0.05). Lastly, both the lentil extract and yeast cell walls tended to reduce the environmental stresses encountered by adhered ETEC, as reported by the respective 60% and 70% decreases in rpoS expression (Figure 4A).
The Lentil Extract Limits ETEC-Induced Inflammation

Host innate immune response-related genes (cytokines) were selected and analyzed during the Caco-2/HT29-MTX experiments. ETEC infection of intestinal cells triggered the expression of all cytokine genes, as reported by the respective 65-, 5-, 63-, 244-, and 2-fold increases in TNF-α, IL-1β, IL-6, IL-8, and IL-10 expression (p < 0.05, Figure 5). The lentil extract tended to reduce the induction of all of these genes, with significance reached for IL-1β, IL-6, and IL-10 (p < 0.05), with decreases of 52, 52, and 41%, respectively (Figure 5A,C,D). The results were more mitigated with the specific yeast cell walls, which only reduced IL-10 expression (p < 0.01) (Figure 5A). We further analyzed the IL-8 concentration to assess the impact of fiber-containing products on cytokine induction at the protein level. As expected, ETEC inoculation induced a significant (p < 0.001) 1.6-fold increase in intracellular IL-8 production (Figure 5F). Both the yeast cell walls and the lentil extract were able to significantly decrease intracellular IL-8 production under non-infected conditions (p < 0.05). In the infected condition, the protective effect was mostly preserved for lentils (p < 0.001), with relative IL-8 levels comparable to the control condition without any fiber or bacteria (0.85 ± 0.07 versus 1.00 ± 0.12), while the results obtained with yeast cell walls almost reached significance (p = 0.06).
The Lentil Extract Modulates ETEC Induction of Mucus-Related Gene Expression
Furthermore, mucus-related gene expression was assayed as a witness of the innate effector response. Inoculation with ETEC strain H10407 tended to induce all selected genes except TFF3 (Figure 6). This induction was significant (p < 0.05) for MUC17 (3-fold) and KLF4 (2-fold) only. The lentil extract tended to mitigate the ETEC induction of MUC1, MUC2, MUC5AC, MUC5B, and KLF4, with significance reached for MUC1 and KLF4 (p < 0.05, Figure 6). MUC1 and KLF4 expression were induced by 1.5- and 2.3-fold under the infected condition and returned to 0.9- and 1.2-fold of their basal expression levels with the lentil extract, respectively (Figure 6A,H). Conversely, the lentil extract favored the basal expression of MUC17 (2.4-fold induction, p < 0.05), and this effect was conserved after ETEC inoculation (1.3-fold compared to the non-treated control, p < 0.05, Figure 6F).
Yeast Cell Walls Strengthen Intestinal Barrier Function
As human ETEC strains and their virulence factors can potentially impact the epithelial barrier, the expression of tight-junction-related genes was also followed during the cellular experiments. Among the four genes studied (Figure 7), only CLDN1 was significantly induced by ETEC infection (1.6-fold induction, p < 0.05). Interestingly, this induction was reduced by the yeast cell walls to almost return to the basal level (p < 0.05, Figure 7A). TJP1 expression was also triggered by the lentil extract, but only when ETEC strain H10407 was inoculated (3.4-fold induction, p < 0.05, Figure 7C). Considering these mixed results, we decided to assess the effect of fiber-containing products on epithelial barrier permeability. When applied to the apical side of Caco-2/HT29-MTX transwells for 2 h, none of the tested products increased the absorption of caffeine (Figure 7E) or atenolol (Figure 7F), which were used as markers of transcellular and paracellular permeability [74,75], respectively. Yeast cell walls even significantly decreased caffeine absorption from 21.2 to 17.0% (p < 0.05, Figure 7E), and both products strongly reduced (p < 0.05) atenolol absorption, with 3.0- and 5.8-fold reductions for the lentil extract and yeast cell walls, respectively (Figure 7F). Accordingly, the fiber-containing products led to a rise in TEER over time, with significant 1.3- and 1.4-fold increases for the lentil extract compared to the non-treated condition at 120 and 180 min, respectively (p < 0.05, Figure 7G).
Yeast Cell Walls Mostly Impact Mucus-Associated Microbiota during ETEC Infection
To investigate the impact of dietary-fiber-containing products on ETEC interactions with the luminal and mucosal gut microbiota, batch experiments inoculated with human feces were performed in flasks containing mucin-alginate beads. As expected, at the start of the experiment, the Escherichia/Shigella population became predominant in the luminal phase of the infected bottles, representing 34% of the read count detected by 16S rRNA gene sequencing (Figure 8C) and 15% of active bacteria by RNA fluorescent in situ hybridization (Figure 8E). The proportion of ETEC or Escherichia/Shigella in the luminal phase remained stable during the experimental time course, regardless of the detection technique used (Figure 8A,C,E). The dietary-fiber-containing products had no significant effect on Escherichia/Shigella or ETEC proportions in the luminal phase (Figure 8A,C,E), but a decreasing trend in ETEC levels (1.7-fold lower) with yeast cell walls was observed (Figure 8A). Concerning the mucosal compartment, in infected conditions, the number of adherent ETEC, as reported by qPCR, tended to be 1.2- and 1.7-fold lower with the lentil extract and yeast cell walls, respectively, compared to the non-treated control, but again, no significance was reached (Figure 8B). The 16S rRNA gene sequencing showed a non-significant 33% decrease in adhered Escherichia/Shigella ASV under the yeast cell wall condition compared to the non-treated one (Figure 8D).
Fiber Products Have No Significant Effect on ETEC Colonization in a Complex Microbial Background
To further explore the effects of dietary-fiber-containing products on gut microbiota composition, we performed 16S rRNA gene sequencing and bacterial community analysis. Regarding α-diversity, ETEC infection was associated with a significant decrease in α-diversity evenness in the luminal phase, but supplementation with fiber-containing products had no effect (Figure 9B,C). Both infection by ETEC and supplementation with fiber products had no influence on species richness in the luminal phase (Figure 9A) or on species richness and evenness in the mucosal phase (Figure 9D-F).
Figure 9. Impact of the dietary-fiber-containing products on the ETEC modulation of microbial community α-diversity. Batch experiments were performed using feces from six healthy donors, challenged or not with ETEC strain H10407 and treated or not with fiber-containing products. The graphs represent the variation in microbiota species richness (A,D) and species evenness, represented by the Simpson (B,E) and inverse Simpson (C,F) indexes, at the ASV level. Samples were collected in both the luminal (A-C) and mucosal (D-F) compartments. White, purple, brown, and yellow dots represent individual biological replicates at the beginning of the experiment after ETEC inoculation (Inoculation, T0) or after 24 h of fermentation in the non-treated (Non-treated, T24), lentil extract (Lentils, T24), or specific yeast cell walls (Yeast, T24) conditions, respectively. Black bars represent the means (n = 6). Results that are not significantly different from each other according to Tukey's multiple comparisons tests are grouped under the same letter (p < 0.05).
Concerning β-diversity, an NMDS analysis showed that the stool donor was the predominant explanatory variable for dissimilarities in gut microbiota composition in both the luminal and mucosal compartments (Figure 10A). A permANOVA analysis performed on the samples at T24h and excluding ASV1 (attributed to Escherichia/Shigella) confirmed that donor origin accounted for 10.0% of the dissimilarities (p < 0.001, 999 permutations). ETEC infection was also a significant source of variation, accounting for 6.0% of the dissimilarities (p < 0.001, 999 permutations), but the dietary-fiber-containing products were not (p = 0.51). To go further, a db-RDA analysis was performed on the samples at 24 h using 'yeast', 'lentil', and 'infection' as explanatory variables. The db-RDA clustered infected samples apart from non-infected ones more efficiently in the mucosal phase (Figure 10B). While none of the tested products modified the impact of infection on the gut microbiota structure, yeast samples clustered away from the rest in both the luminal and mucosal compartments, suggesting that the yeast cell wall product was responsible for some, albeit modest, variations in the microbiota community structure (Figure 10B). In the luminal phase, ETEC infection induced a global increase in Escherichia/Shigella (Figure 8A) to the detriment of other groups, such as Bacteroides (Figure 10C,D and Figure S2). At the genus and family levels, no clear difference in the relative abundances of phylogenetic groups was observed between the control and treated conditions at 24 h in the luminal phase, apart from a slight but consistent increase in Tannerellaceae/Parabacteroides with yeast cell walls regardless of the infection status (Figure 10C,D, Figures S3 and S4). Compared to the luminal microbiota, the mucosal non-infected microbiota was depleted of Faecalibacterium and enriched in Clostridium, Roseburia, Bifidobacterium, and Lactobacillus, even if Lactobacillus colonization appeared to be donor-dependent (Figure 10C,D and Figure S4). In the non-treated condition, ETEC infection tended to be consistently detrimental to Clostridium and Bifidobacterium representation on the mucin beads, and the dietary-fiber-containing products tended to limit the Clostridium disappearance (Figure 10C,D and Figure S4). In the luminal compartment, yeast cell walls seemed to reduce Faecalibacterium and Ruminococcaceae abundance and to favor Tannerellaceae/Parabacteroides, while in the mucosal compartment, they appeared to favor Tannerellaceae/Parabacteroides and commensal Escherichia/Shigella colonization. No clear trend was identified for the lentil extract (Figure 10C,D, Figures S3 and S4).
Fiber-Containing Products Slightly Affect Gut Microbial Activities during ETEC Infection
In a last step, the effect of dietary-fiber-containing products on gut microbial activity during ETEC infection was assessed by following various indicators, such as SCFA, gas production, pH acidification, and gas pressure. We also investigated mucin-alginate bead degradation as a measure of the mucosal microbiota degrading capability. ETEC inoculation significantly impacted butyric acid production (p < 0.05, two-way ANOVA), with 1.3-, 1.4-, and 1.2-fold increases in the non-treated, lentils, and yeast conditions, even if no individual significances were reached (Figure 11A). When added, the lentil extract and yeast cell walls increased propionic acid production by 10-20% and 30-40%, respectively, with only the yeast condition reaching significance (p < 0.05, Figure 11A). Regarding pH acidification, at 24 h of fermentation, the pH tended to be increased by around 0.1 when ETEC was inoculated (p = 0.07, two-way ANOVA), with no significant effect from fibers (Figure 11B). ETEC inoculation also tended to be associated with an increased pressure in the bottles at the end of the experiment (p = 0.08, two-way ANOVA, Figure 11C), with, again, no significant impact of fibers. Gas analysis showed that CO2 levels were significantly impacted by both ETEC and fiber-containing products compared to the non-treated and non-infected control conditions (p < 0.05, Figure 11D). However, the addition of fiber products exhibited no significant impact on gas composition under the infected condition. Lastly, dietary-fiber-containing products led to a decrease in mucin bead weight at 24 h, reaching significance for the yeast cell walls in the infected condition (p < 0.01, Figure 11E). Yeast supplementation was indeed associated with an increase in bead degradation by 22 and 23% in the non-infected and infected conditions, respectively. In accordance with our observations, the microbial community structure of the infected samples correlated with pH and butyric acid production, and dietary-fiber-containing products had no effect (Figure 12).
Figure 10. Impact of dietary-fiber-containing products on ETEC modulation of microbial community β-diversity. Batch experiments were performed using feces from six healthy donors, challenged or not with ETEC strain H10407, and treated or not with fiber-containing products. (A,B) Non-parametric multidimensional scaling (NMDS) (A) and distance-based redundancy analysis (db-RDA) (B). Two-dimensional plot visualizations report the microbial community β-diversity at the ASV level, as determined by 16S rRNA gene amplicon sequencing. The db-RDA was performed on the ASV table, excluding the inoculation samples (T 0 ) and ASV1 (attributed to the Escherichia/Shigella genus). Infection and fiber products were provided as the sole environmental variables (binary) and are plotted as vectors (arrows). White, purple, brown, and yellow dots represent individual biological replicates at the beginning of the experiment after ETEC inoculation (Inoculation, T 0 ) or after 24 h of fermentation in the non-treated (Non-treated, T 24 ), lentil extract (Lentils, T 24 ), or specific yeast cell walls (Yeast, T 24 ) conditions, respectively. The samples are represented by dot shapes and square shapes for the infected and non-infected conditions, respectively. The 95% confidence ellipse area is also indicated as a continuous line for the infected condition and as a dotted line for the non-infected conditions. The donor number is indicated for each sample. (C,D) Cumulative bar plots of the relative microbial community composition at the family (C) and genus (D) levels. The area graphs show the relative abundance of the 12 most abundant families and 16 most abundant genera in all six different donors confounded.
Discussion
To date, only a few studies have investigated the potential anti-infectious properties of dietary fibers against the ETEC strains responsible for traveler's diarrhea in humans [18,38,39,76,77]. Using a large panel of complementary in vitro models, we showed that two fiber-containing products from legumes and microbes, namely, a lentil extract and a specific yeast cell wall from Saccharomyces cerevisiae, selected previously [19], were able to exert antagonistic effects towards the ETEC reference strain H10407 at various stages of the pathological process. These products from different origins contain various types of soluble and insoluble fibers, mainly resistant starch, cellulose, and hemi-cellulose for lentils [78] and mannans and β-glucans for yeast cell walls [79]. This variation could explain their differences in terms of the anti-infectious properties found in the present study. The two fiber products were tested at the in vivo relevant concentration of 2 grams per liter of final fiber content. This value was calculated based on the 10 to 30 grams of fibers consumed per day in industrialized countries [80,81] and the approximately 10 liters of fluid passing through the GI tract daily [82]. Of note, as the tested products were not pure, we cannot exclude that components other than fibers could exert anti-infectious properties against ETEC [83].
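The 2 g·L−1 working concentration follows from straightforward arithmetic on the cited intake and fluid figures; a quick check using the midpoint of the reported intake range:

```python
daily_fiber_intake_g = 20.0   # midpoint of the 10-30 g/day range cited above
daily_gi_fluid_l = 10.0       # approximate fluid volume transiting the GI tract per day

concentration = daily_fiber_intake_g / daily_gi_fluid_l
print(f"{concentration:.1f} g/L")   # -> 2.0 g/L, the tested final fiber content
```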
A first target in our study was to investigate if fiber products could affect pathogen growth in classical broth media. None of the tested compounds were able to impact the growth of ETEC strain H10407. This is not unexpected since, to our knowledge, only the human-engineered fiber chitosan has been reported to exert a bacteriostatic effect in vitro on diverse bacterial pathogens, such as enterohemorrhagic Escherichia coli (EHEC) [27]. We also showed that the lentil extract and yeast cell walls were able to sustain ETEC growth in M9 minimal medium, most likely due to the presence of non-fiber components, as E. coli strains are not able to degrade complex polysaccharides on their own [84,85]. We argue that this positive effect on pathogen growth may not be an issue in the context of the complex nutritional and microbial background of the distal small intestine, the main site of ETEC colonization [3,4,[86][87][88][89]. In the human gut, fibers are degraded into smaller carbohydrates by the endogenous gut microbiota, providing substrates for pathogens, such as ETEC, which generally behave as secondary degraders [90]. By performing fecal batch experiments including microbiota of human origin, we confirmed that dietary-fiber-containing products had no significant effect on ETEC colonization in a complex microbial environment, with only a slight tendency of yeast cell walls to reduce pathogen levels in both the luminal and mucosal compartments.
As toxin production is a key feature in ETEC physiopathology, our next step was to study the impact of fiber products on LT toxin. To our knowledge, only one study has previously reported an indirect effect of dietary fibers on ETEC toxins. SCFAs, major end-products of dietary fiber metabolism by gut microbiota, have been shown to significantly reduce or even abolish LT toxin production at a concentration of 2 g·L−1 in CAYE culture medium [91]. Here, we showed that the LT toxin concentration was significantly reduced in culture medium by the lentil extract in a dose-dependent manner. This effect seems to be partly due to the toxin binding to some lentil components acting as a decoy, as previously reported by other groups with GM1 ELISA assays used with other carbohydrates [92]. Despite the involvement of several virulence genes in the ETEC infectious process (including those encoding for toxin production), data investigating the direct impact of dietary fibers on ETEC virulence gene expression are clearly missing in the literature. In this study, we investigated a panel of ETEC virulence genes in cellular assays. We demonstrated that such compounds could be used to modulate the induction of ETEC virulence gene expression by cellular proximity. Such induction was already reported by a previous study for ETEC strain H10407, but on non-mucus-secreting Caco-2 cells [93]. Here, we showed that, at the transcriptional level, the eltB gene was consistently inhibited by the lentil extract. Dietary fiber supplementation is known to modulate the expression of genes involved in fiber degradation [85,94]. Only a few studies have investigated the modulation of virulence genes. As an example, chitosan significantly modified Campylobacter jejuni genes involved in motility, quorum sensing, stress response, and adhesion [95]. Here, our study indicates that a decrease in toxin concentration could be mediated by a direct inhibitory effect of the lentil extract on the expression of the LT-toxin-encoding gene.
Getting access to the epithelium is a crucial step for most intestinal pathogens to fulfill their infection cycle [96]. To this end, ETEC strain H10407 possesses two mucus-degrading enzymes [7,97] and numerous adhesins allowing mucosal adhesion [5,6]. To date, only milk oligosaccharides [38,39] and soluble plantain fibers at a dose of 5 g·L−1 [18] have shown the ability to reduce the adhesion of human ETEC strains (other than H10407) to a Caco-2 cell line. Here, we used a co-culture of enterocytes and mucus-secreting cells to more accurately mimic the physiological situation in the human intestine [42,98,99]. We first observed the inhibition of ETEC adhesion by both fiber-containing products on mucin beads. This anti-adhesive property cannot be explained by the sedimentation effect observed with insoluble fiber particles, as the beads were always maintained under agitation. Only the yeast cell walls were able to reduce ETEC adherence in the more complex Caco-2/HT29-MTX model. Microorganism-derived polysaccharides have already shown adhesion inhibition properties against enteric pathogens [33,[100][101][102], but this is the first time that yeast cell walls have been shown to reduce the mucosal adhesion of an ETEC strain of human origin. By using yeast-alginate beads, we showed that ETEC strain H10407 presented a greater adhesion specificity for the yeast cell walls than for mucin, supporting a potential decoy effect of the product during pathogen adhesion. However, this observed decoy effect did not seem to involve mannose residues, as previously shown when living probiotic yeasts were used [17].
ETEC, as well as its virulence factors, is well known to be linked to innate immune activation and the induction of inflammation in epithelial cell lines, animals, and humans [11,[103][104][105][106][107][108], which could be positively associated with infection severity [12,109]. Here, as expected, we observed a general induction of cytokine-related genes upon ETEC H10407 exposure in cellular experiments [77]. Interestingly, the lentil extract showed a significant inhibitory effect on those genes, while the influence of yeast cell walls was more subtle. The most striking effect was observed on the pro-inflammatory IL-8 for which inhibition by fiber products was observed not only at the gene level but also at the protein level. The underlying mechanisms of dietary fiber modulation of the innate immune response are not clear. A study from He and colleagues, performed on a human ETEC strain, showed that human milk oligosaccharide 2'-fucosyllactose could modulate CD14 expression in infected enterocytes, thus attenuating LPS-induced inflammation [77]. Here, our results showed that the products exerted a basal anti-inflammatory effect (as shown with IL-8 production) but also led to an inhibition of the innate immune response activation, regardless of the inflammatory status (as shown with IL-10 expression), which could be the result of decreased interactions with innate immune receptors. The activation of innate immune receptors is known to ultimately stimulate mucus secretion [110][111][112]. Accordingly, we found in this study that mucus-related genes tended to be activated following ETEC infection and that this activation was limited by both fiber products, with a more significant effect of the lentil extract. Of note, as mucus secretion is involved in pathogen clearance from the mucosal epithelium [111], an inhibition of mucus-related genes by the lentil extract may be considered to be unfavorable in the fight against the ETEC pathogen.
The regulation of tight junctions in intestinal epithelial cells is one of the main means for the host to control epithelial permeability [113]. ETEC ST toxin variants have been largely described as modulators of paracellular permeability and more specifically of tight junctions [114][115][116]. In contrast, few studies have investigated the effect of whole ETEC bacteria on cell permeability, most of them being performed with pig and not human strains [117][118][119]. Kreisberg and colleagues reported that some human ETEC strains, including H10407, elicited a reduction in trans-epithelial electrical resistance (TEER) in T-84 epithelial cell monolayers, mediated by the LT toxin, which induced paracellular permeability [120]. In the present study, we showed that only the claudin-1 encoding gene was upregulated following ETEC challenge. Generally, the upregulation of tight-junction-related genes is regarded as beneficial for the host [121,122]. Meanwhile, we could presume that our observation may result from an activation of innate immunity interacting especially with tight junctions following ETEC infection [123][124][125][126]. When fiber-containing products were added, the most remarkable effects were observed with yeast cell walls, which abolished the ETEC induction of CLDN1 but also significantly decreased transcellular and paracellular permeability and increased TEER values. Up to now, no study conducted on ETEC of human origin has ever reported an attempt to modulate the induced changes in epithelial integrity with dietary-fiber-containing products. In contrast, in vivo studies in pigs have already shown a beneficial effect of dietary fibers such as chitosan or fructooligosaccharides on intestinal barrier disruption [127][128][129]. This positive effect may result from a lower innate immunity activation, as reported by decreases in TLR4 and CD14 expression [127,128] and serological cytokines [129]. However, we also cannot exclude a sedimentation effect of the fiber products upon the intestinal cells or a binding with the molecules used as permeability markers. We argue that, at least, the products are unlikely to be detrimental to cellular integrity. Of note, on the contrary, some authors reported detrimental effects of fibers such as cellulose and arabinoxylan [130], indicating that the outcomes are probably fiber-specific.
Evidence from previous in vitro and in vivo studies supports an influence of ETEC strains on human gut microbiota [13,14,43,131,132]. As microbiota alterations can even favor enteric infections [133,134], we investigated the impact of ETEC strain H10407 on the gut microbiota structure and activity and how it can be further modulated by supplementation with fiber-containing products. None of the tested products were able to restore the microbiota evenness that, according to human in vivo data, is decreased by ETEC infection [14]. We showed that ETEC inoculation was particularly detrimental to mucosal-associated Clostridium species, as already reported by Roussel et al. [43]. Supplementation with dietary-fiber-containing products enabled a slight but consistent (in most individuals) maintenance of Clostridium. Yeast cell walls also induced ETEC-unrelated changes in microbiota composition, with increases in Parabacteroides in both the luminal and mucosal compartments. This result deserves more attention since Parabacteroides species have already been highlighted as potential new-generation probiotic species in intestinal inflammation-related diseases such as metabolic syndrome [135,136] and colorectal cancer [137]. Up to now, only Lactobacillaceae have been regularly highlighted as probiotic species with anti-infectious properties against human ETEC strains [138][139][140]. Here, one donor was particularly colonized by Lactobacillaceae, and this bacterial population was found to be enriched on mucin beads by yeast cell walls in the infected condition. Interestingly, this donor was also the one with the lowest proportion of Escherichia/Shigella on mucin beads. Regarding gut microbial activity, we showed that ETEC inoculation had contrasting effects on fermentation activities, increasing butyric acid production, gas pressure, and CO2 levels but limiting pH acidification. This may result from ETEC mucinase activities, leading to higher substrate availability for fermentation, combined with E. coli acid resistance systems, which notably consume H+ to produce H2O, H2, and CO2 [141]. Up to now, only two in vitro studies have evaluated the effect of ETEC on human gut microbial activity [43,132]. However, major differences in experimental conditions hampered any comparison. When added, fiber-containing products had a small impact on ETEC-induced changes in microbiota activity. Unsurprisingly, they only seemed to further favor fermentation activities (e.g., fermentation gases). Lastly, since previous studies have elegantly shown in mice that dietary fiber intake limited pathogen infection by protecting the mucus layer from degradation [36,142,143], we measured the total weight of mucin beads at the end of the batch experiments. However, this hypothesis was not confirmed here, probably because of the use of simple batch experiments, which did not include goblet cells or allow the continuous supply of fiber sources or a renewal of luminal content.
Conclusions
Using a large panel of in vitro models, this study demonstrated that fiber-containing products, namely, a lentil extract and yeast cell walls, can exert anti-infectious activities against the human reference strain ETEC H10407. The tested products were found to interfere with the ETEC infection process during virulence gene expression, cell adhesion, cross talk with intestinal host cells, and interactions with gut microbiota. Even if the products were not pure fibers, these results are encouraging for further mechanistic investigations. Next steps should be dedicated to the study of dietary fiber/ETEC interactions in more complex and dynamic multi-compartmental models of the human GI tract, such as the TNO intestinal model (TIM) or the Simulator of the Human Intestinal Microbial Ecosystem (SHIME), before going further in animal models, where their effect on the whole organism (e.g., prevention of diarrhea) can be evaluated. These findings reveal important implications regarding how our immediate diet history may modify susceptibility to some enteric diseases, but also provide meaningful insights into the use of low-cost dietary-fiber-containing products as a relevant prophylactic strategy in the fight against ETEC infections and traveler's diarrhea.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/nu14102146/s1, Figure S1: Effect of dietary-fiber products and ETEC H10407 infection on intestinal cell viability; Figure S2: Cumulative bar plots of fiber-containing products and ETEC modulation of microbiota composition at the family level, excluding ASV1; Figure S3: Donor-specific impact of dietary-fiber-containing products on ETEC modulation of microbiota β-diversity at the family level; Figure S4: Donor-specific impact of dietary-fiber-containing products on ETEC modulation of microbiota β-diversity at the genus level.
Informed Consent Statement: Written informed consent was obtained from all donors prior to fecal collection.
Data Availability Statement:
The 16S RNA gene amplicon sequencing data were deposited and are publicly available in the NCBI Sequence Read Archive database with accession number PRJNA802368. | 2022-05-23T15:03:02.200Z | 2022-05-01T00:00:00.000 | {
"year": 2022,
"sha1": "b7611be48bcfd81dfef5b131aca9f4c7439fcf41",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/nu14102146",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3efc74544f65302c7023a2eb29d663998362e94b",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
269791278 | pes2o/s2orc | v3-fos-license | The effect of temperature oscillations on energy storage rectification in harmonic systems
Rectification, the preferential transport of a current in one direction through a system, has garnered significant attention in molecules because of its importance for controlling thermal and electronic currents at the nanoscale. Here, we report the presence of energy storage rectification effects in a molecular chain. This phenomenon is generated by subjecting a harmonic molecular chain to an oscillating temperature gradient and showing that the energy absorption rate of the system depends on the direction of the gradient. We examine how the energy storage rectification ratios in the chain are affected by the oscillating gradient, asymmetry in the chain, and the system parameters. We find that energy storage rectification can be observed in harmonic lattice structures with time-dependent temperatures and that, correspondingly, anharmonicity is not required to generate this rectification mechanism in such systems.
I. INTRODUCTION
Rectification in molecular systems has been well-studied because of its importance in the design of nanoscale technologies [1][2][3][4][5][6][7][8][9][10][11]. Molecular rectification can be broadly classified into two types: thermal and electronic. Thermal rectification is a transport phenomenon defined by the preferential flow of heat in one direction through a system [1-6, 12, 13]. It occurs when a temperature gradient generates a large heat current in one direction through a system, but when the direction of the temperature gradient is reversed, only a small relative amount of heat flows in the opposite direction. Thermal rectification is analogous to the electronic rectification processes that occur in electronic diodes [7][8][9][10][11]. Thermal rectifiers/diodes have received significant research interest due to their potential applications in various technologies [1][2][3][4][5][6][14][15][16][17][18][19][20][21]. Electronic rectification is fundamental in many electrical circuit designs, where it is used to control the directionality of electrical currents and to switch a system between different states for logical operations and information processing.
Molecular thermal rectification-the preferential flow of heat in one direction through a nanoscale molecular system-is a complex phenomenon that can involve multiple heat transport processes, including vibrational, electronic, and radiative mechanisms, and the interplay between them [2,14-20]. Molecular thermal rectification falls under the broad umbrella of nanoscale energy transport, a research area with broad importance due to its relevance in the design of new nanotechnologies [19]. Theoretical descriptions of nanoscale and molecular energy transport have been actively developed for decades [18, 21-23, 25-27, 30, 45-55]. These studies have advanced our fundamental understanding of nanoscale energy transport and how it can be utilized to design new nanotechnologies and devices [19,37,[56][57][58][59].
It has recently been observed that subjecting a system to a time-periodic temperature gradient can alter the energy transport properties of the system in comparison to a static temperature gradient and give rise to novel and emergent transport phenomena [60][61][62]. Time-dependent temperatures can be used to affect system properties across various length scales [63][64][65][66], including inducing thermal rectification in macroscopic solid-state systems [67]. Temperature modulations play an important role in the function of a multitude of systems, including energy storage technologies [68,69], energy harvesting materials [70][71][72], heat transport devices [73][74][75], and thermal logic devices with memory [76,77]. Therefore, developing theoretical tools to describe these processes and using those tools to discover new energy transport mechanisms is important in a diverse range of research fields.
In this article, we report a rectification mechanism that differs from thermal or electronic rectification. This phenomenon is termed energy storage rectification-an effect in which the amount of energy stored by a system depends on the direction of an applied thermal gradient. We specifically examine energy storage rectification and how it can be induced and then controlled using a temperature gradient that is oscillating in time. We present results that illustrate how an oscillating temperature gradient affects energy storage rectification ratios in a harmonic molecular chain. We apply the formalism developed previously by us in Ref. 61 and Ref. 62 to calculate the energy fluxes and the corresponding energy storage properties in the model. This formalism uses a nonequilibrium Green's function approach adapted to treat the non-stationary distributions that arise when the bath temperatures are time-periodic. Here, this formalism is applied to understand energy storage in harmonic chains. We calculate the energy flux in/out of the chain under forward and reverse time-periodic thermal bias conditions and then compare their ratio. We find that energy storage rectification effects can be observed in harmonic chains and that, correspondingly, anharmonicity is not required to observe these effects [20,[78][79][80].
II. MODEL
The model we consider is a one-dimensional harmonic chain of N particles connecting two heat baths with temperatures that are oscillating in time. The two heat baths are labelled L for "left" and R for "right", respectively. The equations of motion for this system are:

m_j ẍ_j = k(x_{j+1} − 2x_j + x_{j−1}) − δ_{j,1}[γ_L ẋ_j − ξ_L(t)] − δ_{j,N}[γ_R ẋ_j − ξ_R(t)],   (1)

where m_j is the mass of the jth particle, x_j is the displacement of particle j from its equilibrium position, k is the harmonic force constant between particles, γ_L and γ_R are coupling constants between the system and the respective baths, and ξ_L(t) and ξ_R(t) are stochastic forces that obey the correlations

⟨ξ_L(t) ξ_L(t′)⟩ = 2γ_L k_B T_L(t) δ(t − t′),  ⟨ξ_R(t) ξ_R(t′)⟩ = 2γ_R k_B T_R(t) δ(t − t′),   (2)

where k_B is the Boltzmann constant and T_L(t) and T_R(t) are the time-dependent and oscillatory temperatures of the left and right bath, respectively. Details on the specific functional forms we use for the time-dependent temperatures are given below. It is important to note that the harmonic system in Eq. (1) with correlations in Eq. (2) does not exhibit thermal rectification effects, even in the presence of temperature oscillations [20,[78][79][80][81]. Our aim in this article is to examine if energy storage rectification effects can be generated in a harmonic system by applying an oscillatory temperature gradient.
To examine energy storage rectification, we consider two states for the time-dependent thermal gradient between heat baths: a baseline state denoted by "+" and a state with the temperature gradient reversed denoted by "−". In state "+", the bath temperatures take the specific forms:

T_L(t) = T_L^(0) + ∆T_L sin(ω_L t),  T_R(t) = T_R^(0) + ∆T_R sin(ω_R t),   (3)

where T_L^(0) and T_R^(0) are the temperatures of the two baths in the limit of vanishing temperature oscillations, ∆T_L and ∆T_R are the amplitudes of the oscillations, and ω_L and ω_R are oscillation frequencies. The system-bath couplings are γ_L and γ_R, and we define γ = γ_L + γ_R. In this article, we will always consider cases in which ω_L and ω_R are commensurate so that the system is periodic with total period T. In the baseline state, the temperature difference between the two baths is ∆T(t) = T_L(t) − T_R(t), and the noise correlations are those given in Eq. (2). In state "−", the temperature gradient is reversed, with the bath temperatures swapped, T_L(t) → T_R(t) and T_R(t) → T_L(t), and the temperature difference between the baths is −∆T(t). The noise correlations in the reverse thermal bias state follow from Eq. (2) under the same swap. Note that the system-bath couplings γ_L and γ_R are also reversed in this state. Figure 1 is a schematic diagram of the model in the two thermal bias states.
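To make the model concrete, the following is a minimal numerical sketch of Eqs. (1)-(3) using a simple Euler-Maruyama integrator. This is an illustrative reconstruction, not the authors' nonequilibrium Green's function calculation; the free-end boundary conditions, step size, and function names are assumptions.

```python
import numpy as np

def simulate_chain(masses, k=250.0, gL=2.0, gR=2.0, TL0=1.5, TR0=1.0,
                   dTL=0.1, dTR=0.0, wL=5.0, wR=0.0, kB=1.0,
                   dt=1e-4, steps=200_000, reverse=False, seed=0):
    """Integrate the chain and return the total energy E(t) at every step."""
    rng = np.random.default_rng(seed)
    m = np.asarray(masses, dtype=float)
    N = m.size
    if reverse:  # state "-": swap bath temperatures, amplitudes, and couplings
        TL0, TR0 = TR0, TL0
        dTL, dTR = dTR, dTL
        wL, wR = wR, wL
        gL, gR = gR, gL
    x = np.zeros(N)
    v = np.zeros(N)
    energy = np.empty(steps)
    for s in range(steps):
        t = s * dt
        TL = TL0 + dTL * np.sin(wL * t)   # oscillating bath temperatures, Eq. (3)
        TR = TR0 + dTR * np.sin(wR * t)
        # Harmonic nearest-neighbor forces (free ends assumed)
        f = np.zeros(N)
        bond = k * (x[1:] - x[:-1])
        f[:-1] += bond
        f[1:] -= bond
        # Langevin friction and noise on the end particles, consistent with Eq. (2)
        f[0] += -gL * v[0] + np.sqrt(2.0 * gL * kB * TL / dt) * rng.standard_normal()
        f[-1] += -gR * v[-1] + np.sqrt(2.0 * gR * kB * TR / dt) * rng.standard_normal()
        v += f / m * dt
        x += v * dt
        energy[s] = 0.5 * np.sum(m * v**2) + 0.5 * k * np.sum((x[1:] - x[:-1])**2)
    return energy
```

A single trajectory is noisy; the ensemble average ⟨E(t)⟩ needed for the system energy flux would be estimated by averaging the returned energy series over many independent seeds.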
The oscillating temperature gradient gives rise to time-dependent energy fluxes in the applied model. Those energy fluxes can be separated into three terms: (1) J_sys is the energy flux in/out of the system, i.e., the molecular chain, (2) J_L is the energy flux associated with the left bath, and (3) J_R is the energy flux associated with the right bath. In the limit of constant bath temperatures, the system will reach a steady state in which the system energy flux vanishes, J_sys = 0. But, in the systems examined here, the system energy flux does not vanish because of the temperature oscillations [14,62]. The system energy flux in the respective thermal bias state is

J_sys^±(t) = ∂_t ⟨E(t)⟩^±,

where E(t) is the total energy of the chain, ⟨⟩ represents an ensemble average, and the superscripts ± denote that the expectation value is evaluated in the corresponding bias state. The energy fluxes for the left and right baths in each bias state are J_L^±(t) and J_R^±(t). The functional forms for these expressions can be derived using a stochastic energetics formalism [24,82]. Because the bath temperatures are periodic in time, the model will not reach a steady state defined by ∂_t E(t) = 0 and J_L(t) = −J_R(t). Instead, it approaches a time-dependent nonequilibrium state with an average energy that is oscillating in time and, therefore, a system energy flux J_sys(t) that is not equal to zero for all t. The oscillating system energy implies that the system is storing and releasing energy as the temperature gradient is oscillating. Here, we address the question: Does the direction of the time-periodic temperature gradient affect energy storage and release?
To quantify the extent of energy storage rectification, we calculate the energy absorbed by the system over a period of oscillation T in each of the two thermal bias states "+" and "−" and compare them. The energy stored by the system over one period of oscillation is [14,[83][84][85]]

Q_storage^± = ∫_0^T J_sys^±(t′) Θ(J_sys^±(t′)) dt′,   (15)

where Θ is the Heaviside function. We term this quantity the energy storage capacity. It is important to note that over a period of oscillation, the total system energy storage is offset by the total energy release, meaning that ∫_0^T J_sys^±(t′) dt′ = 0. The energy storage rectification ratios are defined using

R_storage = Q_storage^+ / Q_storage^−.

When R_storage ≠ 1, energy storage rectification is observed.
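Numerically, these two quantities can be evaluated from an estimate of the ensemble-averaged energy over one steady oscillation period; a small sketch consistent with the definitions above (variable names are illustrative):

```python
import numpy as np

def storage_capacity(E_avg, dt):
    """Q_storage from one period of the ensemble-averaged system energy."""
    J_sys = np.gradient(E_avg, dt)                              # J_sys(t) = d<E(t)>/dt
    return np.trapz(np.where(J_sys > 0.0, J_sys, 0.0), dx=dt)   # keep absorbing part only

# Q_plus = storage_capacity(E_plus, dt)    # baseline "+" state
# Q_minus = storage_capacity(E_minus, dt)  # reverse "-" state
# R_storage = Q_plus / Q_minus             # != 1 signals energy storage rectification
```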
To generate asymmetry in the system (a property that is necessary for rectification), we vary the masses along the chain. Specifically, we use two different models, Model I and Model II, for the masses in Eq. (1). In Model I, we generate a mass gradient by taking the mass of the ith particle in the chain to be m_i = m_max − (i − 1)(m_max − m_min)/(N − 1), where m_max is the mass of particle 1 and m_min is the mass of particle N [86], with m_max > m_min. In Model II, each particle i in the chain has the same mass m_i = m except for a single massive particle at position n that has mass m_n ≫ m_i ∀ i ≠ n. Therefore, Model I describes a chain with a mass gradient and Model II describes a chain with single point mass disorder. Both models lead to mass asymmetry.
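The two mass assignments translate directly into code; a short sketch following the formulas in this paragraph (1-indexed particle positions, as in the text):

```python
import numpy as np

def masses_model_I(N, m_max, m_min):
    """Model I: linear mass gradient from m_max (particle 1) to m_min (particle N)."""
    i = np.arange(1, N + 1)
    return m_max - (i - 1) * (m_max - m_min) / (N - 1)

def masses_model_II(N, n, m=1.0, m_heavy=10.0):
    """Model II: uniform masses with a single heavy particle at position n."""
    masses = np.full(N, m)
    masses[n - 1] = m_heavy
    return masses
```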
III. RESULTS
We have previously developed a formalism in Ref. 62 to calculate the energy fluxes in a harmonic chain using a nonequilibrium Green's function technique adapted to treat the non-stationary distributions that arise when the bath temperatures are time-periodic. Here, for brevity, we will apply those results without details or mathematical exposition, but for a complete description of the applied theoretical approach we refer the reader to our previous work [62].
Figure 2 shows the system energy flux J_sys(t) in each of the two thermal bias states (baseline and reverse) as a function of time for two sets of system parameters. The values of J_sys^+(t) and J_sys^−(t) are represented respectively by the red and blue curves. The results in Fig. 2(a) are for a chain with N = 2 particles where one of the particles has more mass than the other (Model I). This mass asymmetry in the chain generates a mass gradient. The temperature oscillation frequencies are ω_L = 2 and ω_R = 3. It can be seen that J_sys^+(t) has a different functional form than J_sys^−(t). In this case, the oscillation phase of the system energy can be significantly different in the two bias states. This implies that the energy storage in each of the two states, Q_storage^±, will be different, i.e., that energy storage rectification effects could be observed. Results for a chain of N = 5 particles are shown in Fig. 2(b). In this case, only the left bath temperature is oscillating, ω_L = 5, while the other bath temperature is constant. Again, J_sys^+(t) is different from J_sys^−(t). The primary observation in Fig. 2 is that the system energy flux depends on the thermal bias state, i.e., the direction of the thermal gradient, a phenomenon that gives rise to energy storage rectification effects. It is important to note that we do not observe any thermal rectification effects in any of the models examined in this manuscript.
As illustrated in Fig. 2, asymmetric chains can show differences in peak magnitudes and functional forms for the system energy fluxes in the different bias states. In a symmetrical chain, however, the system energy flux will be the same in both bias states, J_sys^+(t) = J_sys^−(t), and there will be no energy storage rectification. We have confirmed that the numerical codes used to generate the results presented here give this result for the case of a chain with no mass asymmetry. Also, note that rectification will only be observed away from the quasistatic limit, which is the limit in which the baths oscillate slowly with respect to the system-bath couplings (ω_L/γ, ω_R/γ → 0, 0). In the quasistatic limit, J_sys^± = 0 and so R_storage → 1, so there is no rectification in this limit.
Figure 3 shows the energy storage results for Model I (a chain with a mass gradient) as a function of increasing mass gradient magnitude for a chain with N = 5 particles. Varying the maximum mass m_max while holding m_min = 0.1 constant gives rise to nonlinear behaviors in the energy storage capacities, as shown in Fig. 3(a). The energy storage capacities in both the baseline and reverse bias states are nonmonotonic and nonlinear with respect to variation of the mass gradient magnitude. This gives rise to behaviors in which the rectification ratio oscillates as the mass gradient is varied, as shown in Fig. 3(b). This result implies there are frequency-dependent resonances in the phononic density of states that are more strongly influenced in one of the bias states than the other. We observe maximum energy storage rectification ratios up to ≈ 1.8 in this system. This illustrates that by reversing the thermal bias state in a system with oscillating temperatures, the energy storage capacity can be significantly different in one state compared to the other.
Shown in Fig. 4(a) are the values of Q_storage^± in Model II, a chain with a single point mass disorder. In this model, there is one massive particle in a chain of particles with otherwise uniform masses. The x-axis shows the position of the massive particle. It can be observed that the energy storage values do not change smoothly with variation of the site of the mass disorder; instead, oscillating (zig-zag) patterns are observed. This means a small alteration of the mass disorder can significantly change the energy storage capacity in the two thermal bias states. The energy storage capacity of both bias states is symmetric for n = 6. This is expected because this is the case when the massive particle is located in the middle of the chain. The system is symmetric in this case, and so the energy storage capacity is the same in both states. The corresponding rectification ratios are shown in Fig. 4(b). Note that the examined system is completely harmonic and that no anharmonicities exist in the particle-particle interactions. The rectification ratios vary between approximately 0.6 and 1.6 depending on the location of the mass disorder in the chain. When the massive particle is located at the exact middle of the chain, n = 6, there is no rectification (R_storage = 1), as expected for a symmetric system. A related observation is that it is not necessarily true that the rectification is smaller when the mass disorder is closer to the middle of the chain. In fact, the highest rectification value occurs when the massive particle is located at the two positions (n = 5 and n = 7) adjacent to the middle symmetric site. This means that the rectification effects cannot be generally correlated with the degree of deviation of the center-of-mass of the chain from the symmetrical position. Figure 5 illustrates how the energy storage capacity in each bias state changes as the system-bath coupling strengths are varied for a model with single point mass disorder (Model II). Here, we take the couplings for each bath to be equal, γ_L = γ_R, so the x-axis represents the value of both couplings simultaneously. The values of Q_storage^+ are shown in red and Q_storage^− in blue, and results are shown for two different temperature oscillation frequencies as labeled in the plot. The energy storage capacities can be significantly altered by changing the system-bath coupling strengths.
There are three important physical regimes that can be observed. As γ_L and γ_R go to zero, the coupling between the chain and the thermal baths is too small to facilitate the amount of energy transfer from the baths to the system needed to observe significant energy storage. In this regime, Q_storage^± → 0. In another regime, as γ_L and γ_R are increased from zero, the energy storage increases, goes through a maximum, and then begins to decrease. This turnover behavior is analogous to the Kramers turnover behavior observed in chemical reaction rates and thermal conductance properties [33,87,88]. As γ_L and γ_R become large, Q_storage^± → 0. This is the quasistatic limit in which J_sys^± = 0 due to slow temperature oscillations relative to the system-bath coupling frequencies. In the quasistatic regime, the system dissipates energy into the baths on a time scale much faster than the temperature oscillation frequency, and therefore the system energy flux vanishes. The rectification ratios for this system exhibit a similar turnover behavior, with R_storage → 1 as the system-bath couplings go to 0 and R_storage → 1 in the opposite limit of strong system-bath coupling as well. In between these two limiting regimes, the energy storage rectification effects will be maximized.
The rectification effects can also be altered by varying the length of the chain. Figure 6 illustrates this behavior in Model II. The energy storage capacities shown in Fig. 6(a) show that different chain lengths have different storage capacities and that the effect of varying the chain length is different in each bias state. Figure 6(b) shows the corresponding rectification ratios. The primary observation is that rectification varies nonmonotonically and nonlinearly as the chain length is varied. This behavior can be attributed to the time-dependent populations of vibrational modes in the system due to the temperature oscillations. In essence, the oscillating temperatures induce a time-dependent spectrum in which the vibrational modes populate and depopulate over time by factors that are nonlinearly proportional to the chain length. Varying the chain length can also significantly affect the magnitude of the rectification ratio, with values up to ≈3.0 observed for this set of system parameters. Overall, several observations are of note:
• Energy storage rectification effects can be generated due to temperature gradient oscillations.
• We do not observe thermal rectification.
• There must be a system energy flux to observe energy storage rectification. This only occurs away from the quasistatic limit, i.e., the limit in which J_sys^± = 0 due to slow temperature oscillations relative to the system-bath coupling frequencies. Rectification will only be observed away from this limit.
• A turnover behavior is observed in the energy storage rectification with respect to variation of the system-bath couplings. This effect is similar to the Kramers turnover.
• We have illustrated the rectification effects over a limited parameter range. A comprehensive study of how each parameter in the model affects rectification is an important next step.
IV. CONCLUSIONS
We have examined energy storage rectification in a harmonic molecular chain in the presence of a temperature gradient that is oscillating in time. The presented results illustrate how an oscillating temperature gradient modifies energy transport through a molecular lattice, leading to rectification effects. These effects arise due to differences between the system-bath relaxation rates and other frequencies in the system, for example the oscillation frequency of the temperature gradient, that generate non-stationary distributions in the molecular chain. We do not observe any net thermal rectification effects in the applied harmonic model; however, future work on anharmonic systems is an important next step in this direction. We have demonstrated that harmonic systems can exhibit energy storage rectification effects. Overall, the time-periodic modulation of a temperature gradient can affect the energy absorbing properties of nanoscale and molecular systems and give rise to emergent phenomena. Our results open a new design strategy for molecular devices, capacitors, and batteries with energy storage properties and power cycles that are controlled using time-dependent temperature oscillations.
V. ACKNOWLEDGMENTS
We acknowledge support from the Los Alamos National Laboratory (LANL) Directed Research and Development funds (LDRD). This research was performed in part at the Center for Nonlinear Studies (CNLS) at LANL. The computing resources used to perform this research were provided by the LANL Institutional Computing Program.
FIG. 1. Schematic diagram of a molecular chain connecting two heat baths with oscillating temperatures in the baseline "+" (top) and reverse "−" (bottom) thermal bias states. The temperatures of the left bath and right bath, T_L(t) and T_R(t), are oscillating in time as illustrated by the graphs on the left and right of the figure. In the reverse state, the temperatures of the two baths are swapped: T_L(t) → T_R(t) and T_R(t) → T_L(t). The corresponding system-bath couplings γ_L and γ_R are also swapped. The system energy flux ∂_t E(t) is different in each thermal bias state as illustrated by the graphs in the middle of the figure.
FIG. 3. (a) Energy storage capacity of Model I calculated as a function of mass gradient, quantified by varying m_max while keeping m_min = 0.1 constant. The chain length is N = 5. The energy storage is calculated using Eq. (15) and shown in units of k_B T. (b) The corresponding energy storage rectification ratio as a function of m_max. Parameters are γ_L = γ_R = 2, k = 250, T_L^(0) = 1.5, T_R^(0) = 1.0, ∆T_L = 0.1, ∆T_R = 0, ω_L = 5, and ω_R = 0. All values are given in reduced units as specified in the caption of Fig. 2.
FIG. 4. (a) Energy storage capacity of Model II as a function of single-point mass disorder. Each particle in the chain has a mass of m = 1 except for a single massive particle at location n (shown on the x-axis) with m_n = 10. The values of Q_storage^+ are shown in red and Q_storage^− in blue. (b) Energy storage rectification ratio for the same system as a function of the location of the massive particle in the chain. Parameters in both panels are γ_L = γ_R = 2, k = 250, T_L^(0) = 1, T_R^(0) = 1.5, ∆T_L = ∆T_R = 0.1, ω_L = 3, and ω_R = 2.
FIG. 5. Energy storage capacity as a function of the system-bath couplings γ_L and γ_R with γ_L = γ_R for Model II. The values of Q_storage^+ are shown in red and Q_storage^− in blue. The top set of curves are for ω_L = 1 and the bottom set of curves are for ω_L = 5. The mass of each particle in the chain is m = 1 except m_2 = 100. Other parameters are N = 10, k = 250, T_L^(0) = 1, T_R^(0) = 1.5, ∆T_L = 0.1, ∆T_R = 0, and ω_R = 0. All units are given in reduced units as specified in the caption of Figure 2.
FIG. 6. (a) Energy storage capacity of Model II as a function of chain length N. Each particle in the chain has a mass of m = 1 except for the first particle, which has mass m_1 = 100. The values of Q_storage^+ are shown in red and Q_storage^− in blue. (b) Energy storage rectification ratio for the same system as a function of chain length. Parameters in both panels are γ_L = γ_R = 2, k = 2500, T_L^(0) = 1, T_R^(0) = 1.5, ∆T_L = ∆T_R = 0.1, ω_L = 3, and ω_R = 2.
"year": 2024,
"sha1": "fcb13afaad7c614472a8a50aac8979618659289f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1361-648x/ad5d40",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "ee0de1b99435298c1c241b04cfc700c659158c22",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
212677336 | pes2o/s2orc | v3-fos-license | A comparative study on the clinical features of COVID-19 pneumonia to other pneumonias
Abstract Background A novel coronavirus (2019-nCoV) has raised world concern since it emerged in Wuhan, Hubei, China in December 2019. The infection may result in severe pneumonia with clustered illness onsets. Its impact on public health makes it paramount to clarify its clinical features relative to other pneumonias. Methods Nineteen 2019-nCoV pneumonia patients (NCOVID-19) and fifteen other pneumonia patients (NON-NCOVID-19) in places outside Hubei were involved in this study. Both NCOVID-19 and NON-NCOVID-19 patients were confirmed to be positive or negative for 2019-nCoV by real-time RT-PCR of throat swabs and/or sputa. We analyzed the demographic, epidemiological, clinical, and radiological features of those patients, and compared the differences between NCOVID-19 and NON-NCOVID-19. Results All patients had a history of exposure to confirmed cases of 2019-nCoV or travel to Hubei before illness. The median duration, respectively, was 8 (IQR: 6~11) and 5 (IQR: 4~11) days from exposure to onset in NCOVID-19 and NON-NCOVID-19 patients. The clinical symptoms were similar between NCOVID-19 and NON-NCOVID-19. The most common symptoms were fever and cough. Fifteen (78.95%) NCOVID-19 but 4 (26.67%) NON-NCOVID-19 patients had bilateral involvement, while 17 (89.47%) NCOVID-19 but 1 (6.67%) NON-NCOVID-19 patients had multiple mottling and ground-glass opacity on chest CT images. Compared to NON-NCOVID-19, NCOVID-19 patients presented remarkably more abnormal laboratory tests, including AST, ALT, γ-GT, LDH, and α-HBDH. Conclusion The 2019-nCoV infection caused onsets similar to other pneumonias. CT scanning may be a reliable test for screening NCOVID-19 cases. Liver function damage is more frequent in NCOVID-19 than NON-NCOVID-19 patients. LDH and α-HBDH may be considerable markers for evaluation of NCOVID-19.
Background
At the end of 2019, a novel coronavirus (2019-nCoV) emerged in Wuhan, Hubei province, China [1]. Reports showed that the 2019-nCoV infection caused clustered onsets similar to those of severe acute respiratory syndrome (SARS) [1,2]. Previous studies have shown that coronaviruses can cause respiratory and intestinal infections in animals and humans [3].
Generally, coronaviruses were not considered to be highly pathogenic to humans until the outbreak of severe acute respiratory syndrome (SARS) in 2002 and 2003 in Guangdong, China [4,5]. Another highly pathogenic coronavirus, Middle East respiratory syndrome (MERS) coronavirus, emerged in Middle Eastern countries in 2012 [6]. 2019-nCoV is one more coronavirus in history that is highly pathogenic to humans.
The virus has raised world concern because of its high transmission capability as well as high morbidity and mortality [2,[7][8][9]. As of 14th Feb 2020, more than 60000 cases, with over 8000 severe patients, infected with the virus have been reported, and more than 1500 patients have died.
In addition to China, patients have been detected in 25 countries globally. Early reports showed that almost all confirmed patients had evidence of pneumonia [7,9]. However, pneumonias are very common during a time of year when respiratory illnesses caused by infection with other pathogens are highly prevalent [10,11]. So it is a very hard time for public health authorities as well as doctors in this outbreak.
In this study, we investigated the clinical features of 19 confirmed 2019-nCoV pneumonia cases (NCOVID-19) and 15 confirmed pneumonia patients negative for 2019-nCoV (NON-NCOVID-19) with a history of travel to Hubei or exposure to confirmed 2019-nCoV patients before illness, to describe the potential differences in clinical features between the two diseases.
Data collection
We reviewed clinical charts, nursing records, laboratory findings, and chest X-rays for all NCOVID-19 and NON-NCOVID-19 patients. The admission data of these patients were from Jan 23 to Feb 5, 2020. Epidemiological, clinical, laboratory, and radiological characteristics data were obtained with standardized data collection forms from electronic medical records. Investigators interviewed each patient and their relatives, where necessary, to determine exposure or close contact histories during the 2 weeks before the illness onset. To ascertain the epidemiological and symptom data that were not available from electronic medical records, the researchers also directly communicated with patients or their families. If data were missing from the records or clarification was needed, we obtained data by directly communicating with attending doctors and other healthcare providers. All data were checked by two physicians.
Statistical analysis
The quantitative blood laboratory tests were compared by Mann-Whitney U test. The categorical variables were expressed as number (%) and compared by Fisher's exact test.
Differences were considered significant at p < 0.05 with a two-tailed test. All analyses were performed using InStat software (Version 5.0, GraphPad Prism).
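The same two tests are available in open-source tools; for illustration only (the original analysis used the GraphPad software named above, and the numbers below are placeholders except the bilateral-involvement counts taken from the Results):

```python
from scipy.stats import mannwhitneyu, fisher_exact

# Quantitative laboratory values in the two groups (placeholder data)
ncovid_ldh = [245, 310, 198, 276, 330]
non_ncovid_ldh = [180, 210, 175, 205, 190]
stat, p = mannwhitneyu(ncovid_ldh, non_ncovid_ldh, alternative="two-sided")

# Categorical variable: bilateral CT involvement, 15/19 vs. 4/15 (from the text)
table = [[15, 4],    # NCOVID-19: bilateral yes / no
         [4, 11]]    # NON-NCOVID-19: bilateral yes / no
odds_ratio, p_cat = fisher_exact(table)
print(p, p_cat)      # two-tailed; significant at p < 0.05
```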
Illness onset features of patients
To decrease possible effects on laboratory results, we selected patients with similar durations between the NCOVID-19 and NON-NCOVID-19 groups in this study. The median duration, respectively, was 5 (IQR: 3~9) and 4 (IQR: 2~7) days from onset to admission in NCOVID-19 and NON-NCOVID-19 patients. There was no statistical difference between them. On admission, the most common symptoms at onset of illness were fever and cough in both NCOVID-19 and NON-NCOVID-19 patients (Table 2). In comparison, no significant differences were observed between NCOVID-19 and NON-NCOVID-19 patients in these onsets.
The features of CT images
On admission, 15 (78.95%) of the 19 NCOVID-19 patients had bilateral involvement (Table 2). Similar to previous reports [13], the typical feature was multiple lobular ground-glass opacity (Figure). Laboratory results are summarized in Table 3. One NCOVID-19 patient with abnormal LDH also showed abnormal CK (365 U/L), while there was no significant difference between the CK levels of NCOVID-19 and NON-NCOVID-19 patients. In addition, most of both NCOVID-19 and NON-NCOVID-19 patients presented increased levels of CRP and IL-6, whereas no significant difference was observed between the two groups of patients.
Creatinine levels of all patients were normal (data not shown).
Outcome and treatments
By the end of 14th Feb 2020, no patient had needed to be admitted to the Intensive Care Unit (ICU) or administered mechanical ventilation among the investigated NCOVID-19 and NON-NCOVID-19 patients. Except for two NCOVID-19 patients who had a transient decrease in pulse oxygen saturation (SpO2) (92-93%) on admission, the SpO2 of the others remained at 95%-99%.
All NCOVID-19 patients were treated with the antiviral drugs lopinavir and ritonavir tablets and symptomatic supports, while NON-NCOVID-19 patients were treated with antibiotics (moxifloxacin) and other symptomatic supports. Besides drug treatments, psychological counseling was a large component of care for these NCOVID-19 patients because of the panic and anxiety about the illness.
Discussion
The 2019-nCoV, which has caused severe illness, has impacted multiple countries in the world, and sustained human-to-human transmission makes it a world-concerning and serious public health threat [14]. So far, it is unclear when it will actually end. However, the symptoms caused by the virus are similar to those of influenza (e.g., fever, cough, or sore throat), and the outbreak is occurring during a time of year when respiratory illnesses from influenza, respiratory syncytial virus, and other respiratory viruses are highly prevalent. It is very important for clinics to identify the infected patients.
We report here a comparative analysis of 19 pneumonia patients with laboratory-confirmed 2019-nCoV infection and 15 pneumonia patients without 2019-nCoV infection. All patients had a history of exposure to confirmed 2019-nCoV patients or had traveled back from Hubei before illness. The epidemiology data showed that the two groups of patients presented onsets after around one week on average. Similar symptoms were presented by both groups of patients. Fever and cough were the most common symptoms. These symptoms are also common in other acute respiratory infections such as influenza, respiratory syncytial virus, and other respiratory viruses, which may be associated with the difficulty of controlling this epidemic.
Early classification of patients is necessary to prevent and control these epidemics when emergency management has to be conducted in outbreaks like SARS and 2019-nCoV [15,16]. Previous work suggested that CT scanning was a useful tool to screen suspected cases of 2019-nCoV infection [13]. In this study, our data also showed that CT images had a remarkably significant difference between NCOVID-19 and NON-NCOVID-19 patients.
Most NCOVID-19 patients, but not NON-NCOVID-19 patients, had bilateral pneumonia with the feature of multiple mottling and ground-glass opacity in CT images. In addition, somewhat like severe influenza (e.g., H7N9, H1N1pdm09) [17,18], inflammation spread quickly in the lungs of NCOVID-19 patients. CT scanning may be a reliable test for distinguishing NCOVID-19 from NON-NCOVID-19 patients, which would facilitate quick classification of suspected cases or common patients.
In terms of laboratory tests, the absolute value of lymphocytes in most NCOVID-19 and NON-NCOVID-19 patients was reduced. This result suggests that 2019-nCoV infection shares features with many other respiratory virus infections, triggering a strong innate inflammatory immune response and causing depletion of lymphocytes after infection [19][20][21][22].
Inflammation is a time-dependent process, usually starting locally, and is recognized centrally later via blood-borne mediators [23]. Previous studies suggested that an excessive immune response played an important role in the pathogenesis of severe influenza or SARS [24]. IL-6 and CRP may be linked to this excessive immune response [25,26]. In this study, our results also showed abnormally increased CRP and IL-6 in most of both NCOVID-19 and NON-NCOVID-19 patients. In our results, the mean neutrophil ratio was slightly higher in NCOVID-19 than in NON-NCOVID-19 patients, although there was no statistical difference between them. That might be related to the absence of severe cases in this study, because the number of neutrophils was much higher in severe NCOVID-19 than in relatively mild NCOVID-19 in an early report [2].
Previous studies have shown that excessive neutrophils contribute to acute lung damage and are associated with severe disease and fatality in patients with influenza infection [27,28]. Hence, an excessive host immune response may possibly be associated with the pathogenesis of NCOVID-19, in addition to virus-specific factors.
Previous reports showed that a proportion of NCOVID-19 patients had differing degrees of liver function abnormality [2,7]. Our data showed that the levels of liver-function-associated markers (ALT, AST, and γ-GT) were significantly higher in NCOVID-19 patients than in NON-NCOVID-19 patients, and a proportion of NCOVID-19 patients (AST, 26.67%; ALT, 27.78%; γ-GT, 44.44%), but not NON-NCOVID-19 patients, presented abnormal levels of these markers, suggesting that acute liver damage was more frequent in NCOVID-19 than in NON-NCOVID-19 patients. This was also observed in SARS and severe influenza (e.g., H7N9) patients [17,29]. In addition, LDH was abnormal in a proportion of NCOVID-19 patients (31.58%) but not in NON-NCOVID-19 patients, and the available data showed that most NCOVID-19 patients (75%) but only 20% of NON-NCOVID-19 patients had an abnormal α-HBDH. These results suggest that 2019-nCoV infection may result in damage to multiple tissues or organs in addition to liver injury.
As for treatment, all NCOVID-19 patients in this study were diagnosed and treated outside Wuhan, and none developed severe complications such as ARDS or multiple organ failure during admission, which were reported in Wuhan patients and SARS patients [2,7,29,30]. However, because this is a novel disease, people experience more panic and anxiety about it than about other diseases. Psychological counseling should therefore be included as part of treatment.
There are several limitations in this study. First, the sample size was very small, and some laboratory tests were not conducted in some patients because the NCOVID-19 patients came from two hospitals. Second, no severe infections were included, so findings in severe infection could not be compared with those in mild infection. Third, the study lacked a pediatric population.
Contributors
RGao designed the study and wrote the report. DZhao and FYao gathered data and participated in the clinical treatment. ZhLing, YJun, FGuo and HZhao participated in the clinical treatment. RGao, DZhao and LWang performed data analyses. YGao assisted in collating data. All authors contributed to the review and revision of the manuscript and have read and approved the final version.
Acknowledgments
The authors would like to thank the local center for disease control and prevention for the confirmation of NCOVID-19 and NON-NCOVID-19 patients.
Disclosure
The
Conflicts of interest
We declare that we have no conflicts of interest. | 2020-03-12T10:52:27.294Z | 2020-03-12T00:00:00.000 | {
"year": 2020,
"sha1": "9f7a394764af051782a3c906b533a5cfe67f0db2",
"oa_license": null,
"oa_url": "https://academic.oup.com/cid/article-pdf/71/15/756/33538187/ciaa247.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "93faabd42092594c33a127f1f25c1d6fa709a169",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247908525 | pes2o/s2orc | v3-fos-license | Student Characteristics, Institutional Factors, and Outcomes in Higher Education and Beyond: An Analysis of Standardized Test Scores and Other Factors at the Institutional Level with School Rankings and Salary
When seeking to explain the eventual outcomes of a higher education experience, do the personal attributes and background factors students bring to college matter more than what the college is able to contribute to the development of the student through education or other institutional factors? Most education studies tend to simply ignore cognitive aptitudes and other student characteristics—in particular the long history of research on this topic—since the focus is on trying to assess the impact of education. Thus, the role of student characteristics has in many ways been underappreciated in even highly sophisticated quantitative education research. Conversely, educational and institutional factors are not as prominent in studies focused on cognitive aptitudes, as these fields focus first on reasoning capacity, and secondarily on other factors. We examine the variance in student outcomes due to student (e.g., cognitive aptitudes) versus institutional characteristics (e.g., teachers, schools). At the level of universities, two contemporary U.S. datasets are used to examine the proportion of variance accounted for in various university rankings and long-run salary by student cognitive characteristics and institutional factors. We find that depending upon the ways the variables are entered into regression models, the findings are somewhat different. We suggest some fruitful paths forward which might integrate the methods and findings showing that teachers and schools matter, along with the broader developmental bounds within which these effects take place.
Introduction
When seeking to explain the eventual outcomes of a higher education experience, do the personal attributes and background factors students bring to college matter more than what the college is able to contribute to the development of the students through education or other institutional factors? A long line of work within the fields that study cognitive reasoning and aptitudes would suggest that student background characteristics, especially cognitive aptitude, are important to outcomes not only within college but well beyond it (e.g., Brown et al. 2021;Deary et al. 2007;Schmidt and Hunter 2004), whereas other fields, such as those that specifically study higher education, place more emphasis on the role that various institutional factors might play in the performance and eventual achievement of students (e.g., Light 2001;Stinebrickner and Stinebrickner 2007).
Ultimately, it is very hard to disentangle student and background factors from what a college adds, and, in part, all studies on education depend on the factors and methodological approach a researcher wishes to emphasize (e.g., Huntington-Klein 2020; Schlotter et al. 2011;Singer 2019). Most education studies tend to simply ignore cognitive aptitudes and other student characteristics-in particular the long history of research on this topic-since the focus is usually on trying to assess the impact of education, and so the role of student characteristics has in many ways been forgotten, or perhaps even ignored (Maranto and Wai 2020). Conversely, educational and institutional factors are not as prominent in studies focused on cognitive aptitudes or abilities, as these fields focus first on reasoning capacity (Hunt 2009), and perhaps secondarily on the contribution of other factors.
In this paper, we study student characteristics and institutional factors at the level of institutions to assess whether the findings align with the past studies on cognitive aptitudes and student characteristics, but also to examine the role of institutional or other factors. The aim of our paper is not to clearly adjudicate between the two different perspectives but to consider how our approach provides a way of thinking about this problem in different ways. We first provide a historical overview focused on the role of cognitive aptitudes, as this contribution adds to that line of work, and it also illustrates that this ongoing discussion surrounding what factors matter for education has a thread that can be traced back decades. We emphasize that our view on cognitive abilities and aptitudes is that they are developed and that education is both a product of cognitive aptitudes and can enhance cognitive aptitudes (Hair et al. 2015;Lohman 1993;Ritchie and Tucker-Drob 2018;Snow 1996).
Brief Historical Review Focused on the Role of Cognitive Aptitudes
The importance of cognitive aptitudes for life outcomes has been widely replicated across the decades in numerous longitudinal samples globally (e.g., Brown et al. 2021;Deary et al. 2007). Developed cognitive aptitudes are especially important for learning in schools (Detterman 2016;Snow 1996) and for educational outcomes (e.g., Brown et al. 2021;Deary et al. 2007). Though the typical approach to studying the basic science of cognitive aptitudes is not to consider its role in applied or historical contexts, sometimes it is within such contexts that a greater understanding of how and where basic science may or may not hold is obtained. In this paper, therefore, we examine the role of student cognitive aptitude in education by examining the pattern of correlations and variance explained by cognitive aptitude with educational and occupation-related outcomes, and, conversely, the variance explained by institutional factors. First, we provide a historical review of studies at both the individual and group level-illustrating the replication of findings across the very different disciplinary perspectives of cognitive aptitude research and education policy research-and then introduce our empirical study focused on the aggregate level of colleges and universities.
In a landmark U.S. educational report (Coleman et al. 1966), data on aptitude and achievement test scores of students were collected across multiple grades and over 4000 public schools, along with surveys of schools and students, for a total sample of over 645,000. This report uncovered that about 10% to 20% of the variance in student achievement was due to schools, about 80% to 90% was due to student characteristics themselves, and teacher quality accounted for about 1% of the variance (Detterman 2016). This report initiated a national discussion, and much educational research since that time has investigated whether these findings are replicable. This study was on a representative U.S. sample of K-12 students and schools at the time. Do these findings hold for samples at different points in time, for individual and aggregate samples, for studies using different methods, and for studies in different countries? And do these findings hold not only in K-12 education, but also in higher education?
Over the following half century, reviews of the findings of what has come to be known as the Coleman report have largely confirmed them (e.g., Detterman 2016; Gamoran and Long 2007;Jencks et al. 1972). Jencks et al. (1972) replicated the finding that much of the variance in student achievement was due to students, and in a 40-year follow-up of the Coleman report that included data on developing countries, Gamoran and Long (2007) found that the findings were replicated in countries with a per capita income above $16,000.
Using Other Methods: Twin Studies and a Natural Experiment
Other than large sample randomized controlled trials (RCTs), studies of twins are able to account for endogenous factors such as genetics in the estimation of how much of the variance in student achievement is due to students versus teachers or schools in education research (Asbury and Wai 2020;Byrne et al. 2010;Hart et al. 2021). In a recent study examining classroom-level influences on literacy and numeracy among twin samples in the U.S. and Australia (Grasby et al. 2019), the classroom accounted for about 2% to 3% of the variance in achievement. These authors cautioned that although these averaged results may be a lower bound estimation, and that their design could not detect classroom influences at the level of the individual student, their estimate was at odds with much of the global public discourse focused primarily on the influence of teachers and the classroom.
An unusual opportunity for a natural experiment arose in World War II, due to the city of Warsaw, in Poland, being destroyed. The government assigned residents randomly in the newly reconstructed city. Firkowska et al. (1978) collected general cognitive aptitude data (Raven's matrices) in addition to parent education and occupation for most of the students born in 1963 in Warsaw. When breaking down the variance in Raven's scores due to district, school, and family characteristics, the authors found that the variance due to schools was about 2.1%. Thus, this estimate is right in line with the twin studies. Though this was an unusual natural experiment, it should be noted that at least for most rigorous large sample educational RCTs in the U.S. and U.K., these studies tend to find very small or uninformative effects that are typically much smaller than the literature that does not typically randomize (e.g., Lortie-Forgues and Inglis 2019; Sims et al. 2020).
Estimates of the Teacher's Contribution to Student Achievement
Studies using K-12 student-level administrative data in the U.S. on a sample of about 23 million students in the states of Florida and North Carolina across the decade studied (Chingos et al. 2014;Whitehurst et al. 2013) were able to estimate the proportion of variance in student achievement on test scores due to teachers at about 4% to 6.7%, due to schools at about 1.7% to 3%, due to districts at about 1.1% to 1.7%, and due to superintendents at about 0.3%. This shows that-at least when ignoring the contribution of students (and related background factors) to student achievement-teachers appear most influential, followed by schools, districts, and superintendents.
Estimates of Teacher and School Effects Using Methods Focused on Forward Causal Inference
This tendency of education research to neglect the contribution of students to student achievement is probably due, in large part, to the focus of the education research community on what variables they think they can change in the educational environment of the student (e.g., Schlotter et al. 2011;Singer 2019). We should note that, up to this point in our brief review, the focus has been on studies at both the individual and group level that examine the proportion of variance accounted for by students, teachers, and schools. Additionally, we have summarized studies by treating ability and achievement tests as somewhat interchangeable, but there are debates around what is measured by large-scale international assessments such as PISA with regard to cognitive aptitudes versus learning outcomes (e.g., Baumert et al. 2009;Engelhardt et al. 2021;Rindermann 2007). Gelman and Imbens (2013) explained that reverse causal questions are questions about the unknown causes of known effects, whereas forward causal inference requires estimating the unknown effects of known causes. Thus, in the literature reviewed so far, we are estimating the known variance proportions accounted for by student, teacher, and school sources without having a research design that can tell us what the specific causes are. However, forward causal questions would take the form of something like "What is the causal effect of having an effective teacher for one year on students' academic outcomes?" (Wai and Bailey 2021), and to answer such a question policy researchers might use a random or quasi-random assignment of students to different teachers and assess the impact of this on outcomes. Much of the time outcomes are changes in test scores in the short run (for a review see Goldhaber 2015), but sometimes the effects of teachers can persist for years, such as on earnings (e.g., Chetty et al. 2011, 2014), and the differential effects of schooling environments can also influence short- and long-run outcomes (e.g., Atteberry and McEachin 2020;Chetty et al. 2014;Dynarski et al. 2013;Wolf 2019).
Thus, the approach taken in this paper focused on reverse causal questions, providing some of the fuzzy boundaries around expectations of what teachers or schools might be able to contribute to the eventual achievement of students, but it does not necessarily take away from the utility of teachers and schools in improving student achievement and outcomes within reasonable bounds. The largest threat in most approaches from the economics of education is selection bias, and education economists and policy researchers themselves point to cognitive abilities as a source of such bias (e.g., Schlotter et al. 2011). For example, if students with higher developed cognitive aptitude are selected into a given program, it becomes unclear whether that higher aptitude, the program, or something else is causing later outcomes for those students. For an integrative understanding, it makes sense to use both or even additional approaches as complementary tools for understanding the role of students, teachers, and schools in student achievement and what interventions may be cost-effective and beneficial relative to counterfactuals.
Estimates of the Students' Contribution to Student Achievement
Up to this point we have focused on reviewing studies looking at the variance in student achievement accounted for by schools or teachers, but Coleman et al. (1966) estimated that roughly 80% to 90% of student achievement variance was due to students and related background factors (Detterman 2016). What about studies that estimate the student's contribution to student achievement? Deary et al. (2007) examined 13,248 English school children who were tested on The Cognitive Abilities Test at age 11 and took General Certificate of Secondary Education (GCSE) tests around age 15 or 16. The correlation between the academic achievement general factor and the cognitive aptitude general factor was 0.81. Kaufman et al. (2012) examined 2520 participants who took the Kaufman intelligence and achievement tests and 4969 participants who took the Woodcock-Johnson intelligence and achievement tests. The overall average correlation between the academic achievement general factor and the cognitive aptitude general factor was 0.83. Thus, in both these studies in the U.K. and U.S., respectively, general cognitive aptitude accounted for roughly two-thirds of the variance in academic achievement.
Higher Education
So far, we have reviewed findings in K-12 education. But what about higher education? At the individual level, Angoff and Johnson (1990) used a sample of 7954 students from 292 institutions who had taken the SAT and then, about a half-decade later, had taken the GRE. They used SAT math, college major, and gender and were able to predict 93% of the variance in GRE math scores. This means that roughly 7% of the variance in student achievement could be attributable to the institution the student attended. Additionally, Dale and Krueger (2002) examined the role of the selectivity of the institution in impacting long-run earnings using large samples and controlling for multiple confounders. Overall, once the SAT of the school was accounted for, there was no connection between the selectivity of the institution attended and long-run earnings. Taken along with arguments from other scholars that the value of higher education may not be so much about the institution one attends (e.g., Caplan 2018;Wolf 2003), this provides findings at the level of higher education similar to those reviewed for K-12 education.
Value-added in higher education. As more attention has been drawn to accountability and transparency in higher education in the past decade, many researchers have turned to value-added methodology to determine what higher education may add to economic opportunities (Roohr et al. 2021;Kulkarni and Rothwell 2015). However, there are certain challenges in measuring value-added in higher education, particularly using administrative data (Cunha and Miller 2014). Such challenges include the lack of year-on-year standardized tests, the lack of longitudinal student-level outcomes, concerns about self-selection into college and university, and the mismatch between students' specialization and outcome measures. Cunha and Miller (2014) proposed a simple model to estimate the value-added of individual institutions that includes pre-enrollment characteristics, unobserved differences in students' profiles and preferences captured by applications and acceptances, and fixed effects for the college they enrolled in. In our current study, unfortunately, we do not have access to student-level characteristics. Instead, we focus on institutional-level characteristics.
This Study
For this study, we link the higher education literature with the cognitive aptitude literature by examining the proportion of variance accounted for by students versus institutional characteristics at the level of colleges and universities in the U.S., at least to the extent that standardized test scores such as the SAT or ACT can be used to tap such student characteristics. Before describing our specific research design and questions in more detail, we explain our perspective on the measurement of student cognitive characteristics that helps unify and integrate the findings that have come from various disciplinary perspectives. The key is the measurement of student cognitive characteristics, in particular the measurement of general cognitive aptitude.
Measurement of Student Cognitive Characteristics
The measurement of student cognitive characteristics, in particular through tests or assessments aimed at measuring cognitive aptitudes and their use in the selection of various kinds, has a long history (Binet and Simon 1905;Spearman 1904; for reviews, see Detterman 2016; Thorndike and Lohman 1990). Even as early as 200 B.C., for example, the Chinese arguably selected for cognitive aptitude through the use of Civil Service Examinations, and even today, the gaokao, or national college entrance examination in China, is viewed as a measure of student cognitive aptitude (Li et al. 2012). Though there are multiple cognitive aptitudes, a general working consensus around the hierarchical model of cognitive aptitudes has emerged that recognizes general cognitive aptitude at the apex along with more narrow aptitudes below that (Carroll 1993).
There is also extensive research on the overlap between aptitude and achievement tests, and, in fact, Kelley (1927; cf. Coleman and Cureton 1954, p. 347) introduced the idea of the jangle fallacy as "the use of two separate words or expressions covering, in fact, the same basic situation, but sounding different, as though they were in truth different," referring to the significant measurement overlap between group cognitive aptitude tests and school achievement tests. Indeed, research has shown that cognitive g and academic achievement g are roughly the same from a measurement standpoint (Deary et al. 2007;Kaufman et al. 2012), that g is measured by nearly any challenging cognitive test with a diversity of tests and item types (e.g., Chabris 2007;Ree and Earles 1991), and that even when test designers intended to measure other aptitudes and achievements, g is uncovered (e.g., Johnson et al. 2004;Schult and Sparfeldt 2016;Steyvers and Schafer 2020). Given this broadly replicated finding, it should come as no surprise to those who acknowledge the body of research on cognitive aptitudes that both the SAT and ACT have largely been found to be measures of g (e.g., Frey and Detterman 2004;Koenig et al. 2008). We should make clear that we are discussing here a very specific, yet central, dimension of student characteristics, that such characteristics can encompass cognitive, noncognitive, and other attributes associated with the student (e.g., Wai and Lakin 2020), and that we view these attributes as developed. As Detterman (2016) puts it, student characteristics can be broadly characterized as things that go with the student when they leave a school, which include aspects associated with income and parental education level (Hair et al. 2015).
Analytic Plan
We build upon this body of work that spans decades and different disciplinary approaches by examining, at the college or university level in the U.S., the proportion of variance accounted for in various college rankings and early to mid-career salary by student characteristics as indicated by SAT or ACT scores, as well as various institutional factors. We draw from two longitudinal databases at two different points in time which measured these factors somewhat differently. The first database was drawn largely from the U.S. News & World Report, along with salary data collected by PayScale. Both sources date from 2014. The second database was drawn from College Scorecard data in 2017-2018 (U.S. Department of Education College Scorecard 2017-2018). Broadly, we seek to examine what proportion of variance student characteristics (as indicated by general cognitive aptitude) account for in typical college and university outcomes, such as rankings and salary, and also to estimate, after cognitive aptitude is taken into account, what proportion of variance in rankings and salary remain for institutional factors to account for among the explainable variance. We also take the flipside perspective and examine the role of what cognitive aptitude adds after accounting for a wide range of institutional factors. We use these two datasets along with Lykken's (1968) approach of constructive replication-the idea of preserving focal constructs in each database but varying construct-irrelevant aspects-to investigate whether findings replicate across the two datasets, and also across the decades of literature reviewed at multiple levels of education.
Data and Analytic Sample
We use two datasets for this study at different time points and measurement of different outcomes to attempt to see if the findings replicate. The first dataset was compiled in 2014 from the U.S. News website using a premium account for full access as well as public data from PayScale. The second dataset was drawn from the College Scorecard database from 2017-2018. This dataset is free and available to access and download via https://collegescorecard.ed.gov/data/ (accessed on 23 March 2022). Table 1 shows each of the comparable variables used in this study, which were purposefully selected to represent student (i.e., SAT or ACT scores) and various institutional factors, of which we discuss how we selected for inclusion in the next section. After matching all observations by university names, we had a total of 1271 universities and colleges in the College Scorecard dataset in 2017-2018, and 883 universities and colleges in the U.S. News dataset.
Variables
Student characteristics. We used average SAT and ACT scores at the institutional level as a proxy for students' average general cognitive aptitude level (e.g., Frey and Detterman 2004;Koenig et al. 2008; see Table 1 for a description of variables). As in prior work, for the U.S. News reported scores this average was computed by translating ACT scores to SAT scores using a conversion table and then taking an average of the 25th and 75th percentile scores (what universities report to U.S. News) to create an SAT average for all schools with data; a sketch of this computation is given below. For the College Scorecard database, an SAT average which was already computed was used.
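For concreteness, the following minimal sketch shows one way the institutional SAT average described above could be computed. The values in ACT_TO_SAT are illustrative placeholders, not the actual concordance table used in the original analysis, and the function names are hypothetical.

```python
# Sketch of the institutional SAT-average computation described above.
# ACT_TO_SAT holds illustrative placeholder values only; the real analysis
# used a published ACT-to-SAT concordance table.
ACT_TO_SAT = {20: 1030, 24: 1180, 28: 1310, 32: 1420, 36: 1590}

def to_sat(score, scale):
    """Map a score onto the SAT scale; ACT scores go through the lookup."""
    if scale == "SAT":
        return score
    return ACT_TO_SAT[score]  # assumes the score is a key in the table

def institution_sat_average(p25, p75, scale="SAT"):
    """Average the reported 25th and 75th percentile scores."""
    return (to_sat(p25, scale) + to_sat(p75, scale)) / 2

# Example: a school reporting ACT percentiles of 24 and 32.
print(institution_sat_average(24, 32, scale="ACT"))  # -> 1300.0
```

Averaging the two reported percentiles yields an interquartile midpoint, a simple summary of the enrolled class when only those two values are published.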
Outcomes. We used average income/salary at early and mid-career points at the institutional level as a proxy for short-term and long-term outcomes of students, as well as university rankings on various measures (see Table 1). College and university rankings are conducted by numerous publications seeking to quantify differences in quality between schools in diverse ways. We drew from rankings data in prior work looking at U.S. News national university and U.S. News liberal arts college rankings, a critical thinking ranking (using a measure of critical thinking known as the CLA+), a Lumosity brain games ranking which included data from different colleges and universities whose students had played their brain training games, Times Higher Education (THE) world and U.S. rankings, and a revealed preference ranking (Avery et al. 2013, p. 425), which ranked schools based on "the colleges students prefer when they can choose among them". Income/salary is a clear and objective occupational outcome metric which is often used in evaluating the role of higher education but has also been linked to cognitive aptitudes (Brown et al. 2021;Judge et al. 2010;Schmidt and Hunter 2004). In this study, we could only use the THE U.S. ranking and the Lumosity brain games ranking in our analysis with sufficient sample size (N~200), where both institutional factors and student cognitive characteristics were examined.
Institutional characteristics. Our institutional-level variables included data on tuition and fees, admission rate, university resources, cost of attending (including room and board), and diversity (see Table 1). The role of tuition and fees and the overall cost of attending university may matter for students in terms of the time they spend studying versus the time they must work in addition to studying. For example, some studies suggest students that attend colleges with higher tuition are more likely to work while studying (Neill 2015). However, whereas Light (2001) suggested that working while studying yielded higher future wages, Stinebrickner and Stinebrickner (2007) noted that additional study time was associated with higher academic performance. Given that work-study and additional study time may come into conflict with one another, it is unclear how tuition and the cost of attending may affect student outcomes, with significant confounding factors coming from students' choice of tracks to complete their degrees (Neyt et al. 2018), in addition to student cognitive aptitudes, which predict numerous long-term outcomes throughout life (e.g., Brown et al. 2021;Deary et al. 2007;Schmidt and Hunter 2004).
School facilities and intellectual resources as well as quality are proxied by endowment, number of faculty, faculty-student ratio, enrollment, and admission rate, though it is unclear whether these resources are crucial for student achievement after graduation (Caplan 2018;Dale and Krueger 2002;Wolf 2003). Some studies suggest that educational expenditure and university resources are modestly related to student learning outcomes for certain groups of students, for example freshmen (Pike et al. 2011;Winitzky-Stephens and Pickavance 2017). Instructor quality might also contribute to student outcomes. Cash et al. (2017) studied the relationship between perceptions and resources of large universities using a multidimensional approach to survey students and instructors, and found that instructors were the key determinant for students' outcomes. In particular, in large universities, to make a class feel small to promote student achievement, the researchers argued effort should be placed on instructor quality and course structure as determined by instructors (Cash et al. 2017). Other university resources, including access to library and electronic databases-which correlate with university financial resources-also have been found to have a positive correlation with student performance (Montenegro et al. 2016).
Researchers have also studied the relationship between classroom diversity as well as diversity courses and students' cognitive outcomes (Roksa et al. 2017;Gottfredson et al. 2008;Bowman 2013). Roksa et al. (2017), leveraging a longitudinal study following three cohorts of students from their first to their last year in college, found that diversity experiences were correlated with student cognitive outcomes, with the correlation being stronger for white students compared with non-white students. Gottfredson et al. (2008) studied 6800 incoming law students in a nationally representative sample and found that classroom diversity had a moderate positive effect on students' "openness and enthusiasm to learn new ideas and perspectives" (p. 85). Bowman (2013) studied a longitudinal sample of 8615 first-year undergraduates at 49 universities and found that frequent diversity interactions were associated with gains in students' outcomes including leadership skills, psychological well-being, intellectual engagement, and intercultural effectiveness. However, Martins and Walker (2006) found that students' unobservable characteristics moderated student achievement significantly even when controlling for attendance, class size, peers, and teachers. With the interest in diversity demonstrated in the literature, in this study we used a college diversity index as a proxy for college diversity. This index, on a scale from 0 to 100, was obtained from the Chronicle of Higher Education database (The Chronicle of Higher Education forthcoming) through a membership subscription (https://www.chronicle.com/package/diversity/ accessed on 23 March 2022).
In the College Scorecard data, we had more than 6000 observations; however, this dataset also has substantial missing data. For example, among the more than 6000 institutions, only 1300 reported average SAT scores at the institutional level. Some patterns we observed in this dataset are: (1) the average SAT score is 1060; (2) there is a wide range of admission rates, total enrollment, faculty salary, and cost to attend; and (3) the majority of institutions are private for-profit. For the U.S. News dataset, missing data were a less significant issue. The majority of institutions in this dataset are private not-for-profit institutions. More details can be found in Appendix A Tables A1 and A2.
Statistical Methods
We used ordinary least squares (OLS) techniques to analyze the relationship between student aptitude, institutional factors, and student outcomes. First, we ran a model with only SAT scores on student outcomes to uncover the variance explained by student characteristics or cognitive aptitude alone (Tables A1-A3 in Appendix A include the full set of outcomes and results based on the broader sample of colleges and universities not restricted based on institutional factor availability for all cases). Second, we used only institutional variables, which accounted for the cost of attending, university types (private, public, and for-profit), locale (urban, suburban, rural, and city), and regions (seven designated regions), to obtain the percent variance explained by institutional characteristics. Third, we included both SAT and institutional factors in our final model. We added controls for university types, locales, and regions to account for plausible differences between types of universities in terms of their internal policies, as well as regional and locale differences that may contribute to variations in institutional outcomes. Our models are as follows:
Model 1: outcome_i = β0 + β1 SAT_i + ε_i, where outcome_i is the respective outcome for school i and SAT_i is the average SAT score for that school.
Model 2: outcome_i = β0 + β1 I_i + Ω_i + π_i + ε_i, where I_i is a matrix of institutional-level variables as mentioned, Ω_i is a university-type fixed effect, and π_i is a location fixed effect.
Model 3: outcome_i = β0 + β1 SAT_i + β2 I_i + Ω_i + π_i + ε_i, the combined model from (1) and (2), where we study the joint explained variance by including both the SAT score and the institutional variables. Errors are clustered at the state level.
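To make the model specifications concrete, the sketch below fits the three models with statsmodels and state-clustered standard errors. The file name and column names (salary_mid, sat_avg, cost, admit_rate, univ_type, locale, region, state) are hypothetical stand-ins for the variables described above, and the sketch assumes a complete-case data frame.

```python
# Sketch of Models 1-3 with state-clustered errors (column names hypothetical).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("institutions.csv")  # hypothetical institution-level file
cluster = {"groups": df["state"]}     # assumes no missing rows, so groups align

# Model 1: SAT average only.
m1 = smf.ols("salary_mid ~ sat_avg", data=df).fit(
    cov_type="cluster", cov_kwds=cluster)

# Model 2: institutional factors plus type/locale/region fixed effects.
inst = "cost + admit_rate + C(univ_type) + C(locale) + C(region)"
m2 = smf.ols(f"salary_mid ~ {inst}", data=df).fit(
    cov_type="cluster", cov_kwds=cluster)

# Model 3: joint model with both SAT average and institutional factors.
m3 = smf.ols(f"salary_mid ~ sat_avg + {inst}", data=df).fit(
    cov_type="cluster", cov_kwds=cluster)

# The two R-squared ratios discussed in the text.
print(m1.rsquared / m3.rsquared, m2.rsquared / m3.rsquared)
```

The two printed ratios correspond to the R-squared comparisons described in the next paragraph.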
Finally, to study the question of what explains the institutional-level outcomes, including the rankings and the average salary of graduates at early- and mid-career points, we calculated two ratios of robust R-squared values. Each ratio indicates how much of the respective outcome variance is explained by one set of predictors (the SAT average score, or the institutional characteristics) relative to the variance explained when both are included. We made sure that we used a sufficient sample size (N~200) for the three models that included SAT average scores and institutional characteristic variables. We also dropped certain outliers in faculty average monthly salary in the College Scorecard data. We dropped outliers by examining the variable's distribution and summary statistics, removing observations that were beyond the lower and upper bounds (median +/− 1.5 × inter-quartile range). Finally, we dropped data points with zero values in retention rates and admission rates in the two datasets.
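A minimal version of the outlier rule just described might look like the following; the column names are hypothetical and the data frame is assumed to be the institution-level file from the previous sketch.

```python
# Drop observations outside median +/- 1.5 * IQR, as described in the text
# (sketch; column names are hypothetical).
def iqr_filter(df, col):
    q1, med, q3 = df[col].quantile([0.25, 0.5, 0.75])
    iqr = q3 - q1
    lo, hi = med - 1.5 * iqr, med + 1.5 * iqr
    return df[(df[col] >= lo) & (df[col] <= hi)]

df = iqr_filter(df, "faculty_salary")
# Zero retention/admission rates are treated as invalid and dropped as well.
df = df[(df["retention_rate"] > 0) & (df["admit_rate"] > 0)]
```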
In addition, we also included dominance analysis (DA) to determine the importance of the independent variables (for further details, see Grömping 2007;Luchman 2015, 2021). This additional analysis provides a picture of which factors contribute the most to our model fit statistic. In particular, DA provides a "theory-grounded method for ascribing components of a fit metric to multiple, correlated independent variables" (Luchman 2015, p. 10).
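As an illustration of the idea behind DA, the generic sketch below computes general dominance weights: each predictor's incremental R-squared, averaged over all subsets of the remaining predictors at each subset size and then across sizes. This is a from-scratch sketch of the general logic, not the dedicated software used in the cited work, and it is exponential in the number of predictors (fine for the handful used here).

```python
# Generic sketch of general dominance weights for an OLS fit metric.
from itertools import combinations
import statsmodels.formula.api as smf

def r2(df, outcome, predictors):
    """R-squared of an OLS model; the empty model contributes 0."""
    if not predictors:
        return 0.0
    return smf.ols(f"{outcome} ~ {' + '.join(predictors)}", data=df).fit().rsquared

def general_dominance(df, outcome, predictors):
    weights = {}
    for p in predictors:
        others = [q for q in predictors if q != p]
        by_size = []
        for k in range(len(others) + 1):
            # Incremental R-squared of adding p to every subset of size k.
            incs = [r2(df, outcome, list(sub) + [p]) - r2(df, outcome, list(sub))
                    for sub in combinations(others, k)]
            by_size.append(sum(incs) / len(incs))
        weights[p] = sum(by_size) / len(by_size)  # average across subset sizes
    return weights  # weights sum to the full-model R-squared

print(general_dominance(df, "salary_mid", ["sat_avg", "cost", "admit_rate"]))
```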
Results
Tables 2 and 3 present coefficients, standard errors, and R-squared values from OLS regressions. In model 1 and model 3 in Table 2, where the SAT average score at the institutional level was included, the estimated coefficients were statistically significant, indicating that the SAT average score was a statistically significant predictor of students' short-term and long-term outcomes measured by salary. This result was replicated across the two datasets. Similarly, when looking at institutional rankings as reported in Table 3, SAT average scores were also a statistically significant predictor of college ranking. The higher the average SAT score, the higher the institution's score in both the THE U.S. ranking and the Lumosity ranking (rankings are reversed in order). Table 4 summarizes the proportion of variance explained in each respective outcome accounted for by average SAT scores even when accounting for institutional characteristics. Panel A presents data collected from the College Scorecard; Panel B represents data collected from U.S. News. In Panel A, we observe that by using the average SAT score only, we were able to account for 42% of the variation in the average salary six years out and 47% of the variation in the average salary ten years out. The explained variation in salary was smaller in Panel B. By using average SAT scores, we were able to explain 30% of the variance in the early-career salary and 41% of the variance in the mid-career salary. In both datasets, average SAT scores accounted for more variance in the institutions' rankings than students' outcomes. In Panel A, 53% of the variance in the institutions' THE U.S. ranking and 51% of the variance in the Lumosity ranking were accounted for by the change in average SAT scores. Similarly, in Panel B, 56% of the variance in the THE U.S. ranking and 56% of the variance in the Lumosity ranking in the second dataset can be accounted for by the change in average SAT scores.
For Model 2 R² in Table 4, we included only selected institutional factors as predictors of students' outcomes and institutional rankings. When comparing the values of R² in Model 1 and Model 2, except for the explained variation in the Lumosity rankings using the U.S. News dataset, institutional factors accounted for more variation in student outcomes and the THE U.S. ranking than the average SAT scores. In particular, looking at R² values for Model 2, in the College Scorecard dataset, institutional factors explained 57% of the variance in short-term salary (six years out) and 64% of the variance in long-term salary (ten years out). In the 2014 U.S. News data, institutional factors accounted for 38% of the variance in the average early-career salary and 53% of the variance in the average mid-career salary at the institutional level. In terms of rankings, institutional factors accounted for 75% of the variance in the THE U.S. ranking in both the College Scorecard and U.S. News data, and for 59% (in the College Scorecard data) and 52% (in the U.S. News data) of the variance in the Lumosity ranking.
When including both average SAT scores and institutional factors in the model, we observed increases in the explained variance of our outcome measures. We also examined the proportion of variance explained by calculating R-squared ratios, comparing two ratios: Model 1 R² to Model 3 R², and Model 2 R² to Model 3 R². We found that, across the two datasets, institutional factors (taken collectively) appear to explain a greater amount of variation in students' average salary at early- and mid-career points and in the institutions' THE U.S. ranking. However, it is worth noting that by using the average SAT alone, we could already explain a large portion of the variation in students' outcomes and institutions' rankings compared to other institutional factors.
Finally, we report findings using the College Scorecard dataset in Table 5 and U.S. News data in Table 6. In our multiple regression model predicting salary outcomes using the College Scorecard data, the top three predictors for six-year salary were: % of students who received a Pell grant, average SAT score, and retention rate (see Table 5 Panel A). For ten-year salary, the top predictors were retention rate, average SAT score, and % of students receiving a Pell grant (see Table 5 Panel B). In the U.S. News data, the top three predictors for early-career salary were average SAT scores, average freshmen retention rate, and endowment (see Table 6 Panel A). For mid-career salary, the top predictors were average SAT score, average freshmen retention rate, and room and board cost (see Table 6 Panel B). (Table notes: robust standard errors clustered at state level; *** p < 0.001, ** p < 0.01, * p < 0.05. Table 6 caption: OLS coefficients and importance of predictors of student long-term outcomes and school rankings using Model 3, U.S. News data.)
Looking at rankings, specifically the THE U.S. ranking and Lumosity ranking, we found that the top three predictors of THE U.S. ranking using the College Scorecard data were completion rate, retention rate, and faculty salary (see Table 5 Panel C). For the Lumosity ranking the top three predictors were average SAT score, % of students who received a Pell grant, and retention rate (see Table 5 Panel D). Using the U.S. News data (see Table 6), we found the top three predictors for the THE U.S. ranking were retention rate, average SAT score, and total enrollment (see Panel C); the top predictors for the Lumosity ranking were average SAT score, retention rate, and endowment (see Panel D). Average SAT score and retention rate were the most significant predictors of both student long-term outcomes and institutional rankings across the two datasets.
Discussion
Overall, our findings aligned historically with much of the research on cognitive aptitudes and variance explained in outcomes, even after accounting for various institutional factors. However, this was from the perspective of cognitive aptitudes being the core variable of importance to consider as a starting point. On the flipside, when entering the multitude of institutional factors first into the regression model, these numerous variables collectively accounted for the majority of the variance in outcomes (in most cases larger than the proportion of variance accounted for by test scores alone), suggesting that institutional factors very likely do matter, in addition to student characteristics and cognitive aspects. Of course, test scores such as the SAT are just one short measure that students take prior to college, so the fact that much of the variance in outcomes is captured by this singular measure should not be underemphasized. At the same time, this analysis illustrates that other institutional factors can matter collectively, and/or the contribution of student characteristics might be obscured or highlighted depending upon which variables one prioritizes in the research design and analysis. Depending upon the ways variables are prioritized and entered into regression models, findings can be quite different.
In the remaining part of this discussion, though we fully acknowledge that institutional factors play an important role in addition to student characteristics, we discuss our findings that link to the historical focus of the academic field focused on cognitive aptitudes, and consider our findings in that broader context, and through the lens of cognitive aptitudes' usefulness.
Limitations
A core limitation of this study is that our research design takes the form of a reverse causal question, in which we cannot isolate causes. Thus, we likely have omitted-variable biases. However, because one purpose of the study was to determine whether the proportions of variance in student achievement due to students or to institutional factors aligned with the large historical literature going back to Coleman et al. (1966;see Detterman 2016, for a review) at the level of colleges and universities, our approach is appropriate to test whether these findings could be replicated in contemporary U.S. samples. Another possible limitation is that our findings are at the group rather than individual level and could potentially reflect the ecological fallacy (e.g., Piantadosi et al. 1988); however, Angoff and Johnson (1990) reported findings similar to ours at the individual level. Another limitation is that the outcomes we examined were restricted to various school rankings and to salary, which are only a limited set of educational and occupational outcomes. University rankings are an imperfect outcome given that the decision to apportion weights to various aspects is quite variable and reflects the policy decisions of the ranker. However, our findings were replicated across many different types of rankings, which reflect numerous weighting formulas (especially see Appendix A Table A1 through Table A3). Additionally, salary is often a core outcome used in evaluating colleges and universities (e.g., Dale and Krueger 2002), and thus the outcomes used are appropriate, but are limited to what we were able to access based on the datasets used. Relatedly, we also have the issue of missing data. Our data were collected from multiple sources that may not adequately synchronize with one another. Therefore, even though at some point we had more than 6000 observations, after running multiple regressions we were down to 200-300 observations. Our findings, therefore, are not necessarily representative of the broader domain of institutions.
Findings Replicate and Extend Those in K-12 to Higher Education and Also Historically
Despite these important limitations, our findings illustrate contemporary replications, across two U.S. datasets at different time points at the level of colleges and universities, of the many studies reviewed in K-12 education, and also historically. Overall, the proportion of variance accounted for by student characteristics as indicated by average SAT/ACT scores or general cognitive aptitude-even after accounting for various institutional factors-was quite consistent across not only typical college rankings but also a critical thinking ranking and a Lumosity brain games ranking (see Tables A1-A3 in Appendix A for the full range of analyses of rankings, excluding institutional factor controls). This suggests that even measures intended to assess supposedly unique constructs such as critical thinking (e.g., Butler et al. 2017) may in fact end up largely overlapping with general reasoning. Additionally, brain games such as those from Lumosity, which were intended to improve cognitive aptitudes, may end up largely measuring a latent learning g factor (e.g., Steyvers and Schafer 2020), which aligns with other research showing that even video games may be measuring cognitive aptitudes (e.g., Quiroga et al. 2015, 2019). The fact that various rankings, such as U.S. News, only lightly weight SAT/ACT scores in their ranking formulas and yet such scores account for the majority of the variance in those rankings suggests that much of university quality may actually be due to student quality at the point of selection (e.g., Dale and Krueger 2002). Of course, this does not rule out various dimensions of university education or impact, such as the brand of a degree helping improve employment prospects, among other factors, but it does provide bounds around thinking of the contribution of developed cognitive aptitudes at the point of testing and institutional or other factors and their contributions to long-run outcomes.
The proportion of variance accounted for by SAT/ACT scores or general cognitive aptitude in long-run salary was replicated across the U.S. News and College Scorecard datasets, which used two different measurements of salary. Overall, College Scorecard data showed that approximately 47% of the variance in salary a decade after graduation was accounted for by such test scores, and U.S. News and PayScale data showed that approximately 41% of the variance in salary at mid-career was accounted for by test scores. Findings for salary, even after accounting for institutional factors, were consistently replicated across different career time points and datasets, ranging from 72% up through 74%.
Part of Student Outcomes May Be Due to Selection, but Teachers and Institutions Still Matter
In a classic paper, Dale and Krueger (2002) showed that once SAT scores were accounted for, there were no differences in long-run salary for students attending a highly selective school compared to those who attended a less selective school. This indicated the importance of selection on student characteristics-especially cognitive aptitudes (see also Angoff and Johnson 1990). Overall, the findings from this study align with the Dale and Krueger (2002) findings suggesting the importance of cognitive aptitudes before college in predicting outcomes well after college (e.g., Lubinski and Benbow 2020). This also aligns with other literature on selective high schools showing that student selection effects, perhaps more than school quality, may be driving differences in outcomes (e.g., Abdulkadiroglu et al. 2014;Dobbie and Fryer 2014;Dynarski 2018), as well as scholars who have argued that much of the impact of college or university may be attributable to selection (Caplan 2018;Wolf 2003). It appears that cognitive aptitudes remain an important threat to selection bias in forward causal inference approaches, and a more careful consideration of how cognitive aptitudes are important across the lifespan in relation to educational interventions and other policies is in order.
Teachers and other institutional factors do matter (as we illustrated by entering institutional factors first rather than cognitive aptitudes in our models). However, at least from the broad empirical historical perspective of cognitive aptitudes research, how and the extent to which institutional and educational factors can matter is bounded in some ways by this broader pattern of student characteristics accounting for a large portion of the variance in long-run student outcomes. For example, Chetty et al. (2011, 2014) illustrated that teacher effects can have causal impacts on long-run earnings, and rigorous work on the differential effects of the types of schooling environments shows that institutional effects matter (e.g., Atteberry and McEachin 2020;Chetty et al. 2014;Dynarski et al. 2013;Wolf 2019), which also aligns with our finding when entering institutional factors prior to cognitive aptitude tests. Additionally, a great deal of literature supports the idea that parents' education level, earnings, and social capital are important to the development of eventual student success (e.g., for a summary see Egalite 2016;Hair et al. 2015;Heckman 2000). The wide range of variables we examined in this study may be picking up some of these factors by proxy. And even though the diversity index, as part of the institutional factors control in this study, did not appear to be a major factor in student outcomes, there may be other values to diversity that are not necessarily quantifiable or achievement-outcome-related, such as simply being exposed to a wide range of people from a unique range of backgrounds and circumstances. More broadly, the resources that an institution holds-such as access to top professors, other highly talented students, opportunities for research, prestige of brand, or alumni networks-can vary widely alongside student cognitive quality, which may serve to further amplify the outcomes of graduates. This may be in part why in the U.S. roughly half of numerous leaders in society have graduated from just a handful of elite institutions and likely, by proxy, have high developed cognitive aptitudes (e.g., Wai 2013;Wai and Perina 2018).
Conclusions and Future Directions
Viewed through the lens of cognitive aptitudes as being important, this paper replicated and extended findings in two contemporary U.S. datasets at the level of universities, extending decades of research at many levels of education and suggesting that a large portion of the variance in student outcomes may be due to student characteristics-in particular developed cognitive aptitude. When coupled with the large literature showing that general reasoning is related to numerous outcomes across the lifespan (Brown et al. 2021;Deary et al. 2007;Kuncel et al. 2004;Schmidt and Hunter 2004), these findings suggest that across at least the last half century the contribution of students to long-run student achievement has been underappreciated in U.S. education (Detterman 2016;Maranto and Wai 2020), a set of variables often omitted in education research (Schmidt 2017). This may also highlight the neglect by U.S. education research and policymakers of general cognitive aptitudes and individual differences in students across a more comprehensive range of well-studied individual-differences characteristics (Lubinski 2020;Revelle et al. 2011). Various cognitive and noncognitive aptitudes might be fruitfully developed by education, but should also be accounted for when helping students receive a differentiated education in schools throughout their developmental trajectory (e.g., Lakin and Wai 2022).
Some fruitful avenues to explore in taking individual differences in aptitude into account for more optimal talent or human capital development might be to more carefully examine what aspects of education could improve intelligence (Ceci 1991;Snow 1996;Ritchie and Tucker-Drob 2018), which educational-intervention effects persist and fade out when accounting for intelligence (e.g., Bailey et al. 2020), and how to differentiate instruction to more closely match the individual differences and characteristics of students (e.g., Lakin and Wai 2020). More broadly, this research highlights the need for the approach of asking reverse causal questions to be integrated with the approach focused on forward causal inference (Wai and Bailey 2021), for education economists and policy researchers to pay more attention to the established structure of cognitive aptitudes as a threat of selection bias when using forward causal inference tools (Schlotter et al. 2011), and for appreciating a broader methodological approach and integration of research evidence which is often found across disciplinary boundaries (e.g., Singer 2019). Ultimately, whether one thinks student characteristics or institutional characteristics matter more is highly dependent upon what research lens and historical evidence one brings to the table in one's sample, research design, and analytical approach.
Data Availability Statement: The data were largely drawn from publicly available sources.
Conflicts of Interest:
The authors declare no conflict of interest. | 2022-04-03T15:30:31.393Z | 2022-04-01T00:00:00.000 | {
"year": 2022,
"sha1": "f670787b69971c86df749b18a2d2ef17bff14f6c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-3200/10/2/22/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "77dbf4b937822cd00c653ed7f3b7565111d89da8",
"s2fieldsofstudy": [
"Education",
"Economics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248581201 | pes2o/s2orc | v3-fos-license | Application of Improved Image Restoration Algorithm and Depth Generation in English Intelligent Translation Teaching System
Introduction
NN-based translation methods model the translation process using continuous vector representations in neural networks (NNs). NNs have a strong fitting ability, so the translation model can automatically learn knowledge from a bilingual parallel corpus. It is worth pointing out that NN-based translation methods also face new problems and challenges that require further in-depth research, for example, the interpretability and robustness of the network structure, repeated and missing translations, and translations that read fluently but are not necessarily faithful. These issues require researchers to continue in-depth study.
Neural machine translation (NMT) utilizes emerging deep learning techniques. It extracts features of the text vocabulary by building a deep neural network and uses end-to-end NN technology to achieve intelligent conversion from one natural language to another. In terms of theoretical value, research on machine translation plays a benchmarking role in natural language processing, and progress in machine translation can drive the development of other fields. The progress of NMT research has strongly promoted the vigorous development of tasks in other areas of natural language processing, such as sentiment analysis, text classification, and dialogue generation. Unlike other research topics that use deep learning technology, machine translation can be said to be a subject at the level of human cognition, and in-depth research on it also promotes the combination of cognitive science and AI science.
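To make the end-to-end idea concrete, the following minimal sketch shows a GRU-based encoder-decoder in PyTorch. This is an illustrative assumption of one simple architecture, not the system described in this paper; production NMT systems add attention or use Transformers.

```python
# Minimal sketch of an end-to-end encoder-decoder NMT model (assumed
# GRU-based seq2seq; illustrative only).
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb=256, hid=512):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True)
        self.decoder = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Encode the source sentence into a hidden state.
        _, h = self.encoder(self.src_emb(src_ids))
        # Decode with teacher forcing: feed the gold target tokens.
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), h)
        return self.out(dec_out)  # logits over the target vocabulary

model = Seq2Seq(src_vocab=32000, tgt_vocab=32000)
src = torch.randint(0, 32000, (8, 20))  # batch of 8 source sentences
tgt = torch.randint(0, 32000, (8, 22))  # shifted target sentences
logits = model(src, tgt)                # shape (8, 22, 32000)
```

Training would minimize cross-entropy between the logits and the shifted target tokens, so the whole pipeline from source tokens to target tokens is learned end to end.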
The innovation of this article is as follows: (1) This paper proposes a scene-universal data augmentation method. It combines the low-frequency word replacement method and the reverse translation method and adds a grammar error correction module. It achieves effective data enhancement in both resource-rich and low-resource scenarios. (2) On the basis of multi-granularity feature fusion as input, this paper adds dynamic word vector embedding and conducts a comparative experiment with static word vector embedding.
(3) It proposes a data augmentation method for unsupervised neural machine translation that utilizes the robustness of statistical machine translation to alleviate the problem of data noise.
Related Work
Starting from the influence of cultural context on Chinese-English translation, Zhang discussed the context of Chinese-English translation, understanding and practicing translation activities from the perspective of cultural translation and practical experience [1]. Teachers play an important role in the educational process. Balla has attempted to highlight some of the most important roles that English teachers play in the challenging teaching process. The ideal teacher, in his opinion, does not fit into a single concept because many factors must be considered. English teachers must take ownership of the subject and encourage students to participate voluntarily. A teacher should not only be knowledgeable about the subject, but also be able to interpret it [2]. Embedded software development and early testing are greatly aided by virtual platforms. Lora proposed a "middle meat" approach to virtualizing heterogeneous systems [3]. The network of NMT encoders and decoders iterates on the network of word multi-semantic encoding, according to Choi's research. According to the context defined in the source sentence, he clarified the source and destination words. The context takes a lot of energy to create. In addition, special symbols (numbers, proper nouns, acronyms, and so on) should be entered to facilitate the translation of words that are not suitable for continuous vector translation [4]. According to Wu's recent research, grammar knowledge can significantly improve NMT performance. The target translation as well as each dependency tree are built and modeled together in this case. The tree structure is used as a framework in decoding to make word creation easier. Finally, to create an NMT dependency model and implement a Transformer dependency-based framework, a grammar encoder is used to extend the dependency sequence [5]. Miura's triangulation method combines source-intensive and centralized target translation models into a single target-source model and is known for its high translation accuracy [6]. According to Choi's research, neural machine translation (NMT) has emerged as a new type of machine translation, with the attention mechanism serving as the primary method [7]. Kim proposes a neural network (NN) architecture for error detection that combines surface windows and syntactic context in a single-language syntactic word representation. Two language pairs and two tasks are used to test the method: detecting grammatical errors and predicting the entire task after processing. His proposed neural network (NN) architecture [8][9][10] is forward-looking, but it still has a lot of room for improvement.
Improved Image Restoration Algorithm and Depth Generation Algorithm
3.1. The Teaching Environment of English Intelligent Translation. The intelligent translation teaching constructed in this study consists of elements such as students, learning communities, teachers, and educational resources. Smart classroom is an educational system that integrates educational software and hardware, diagnosis, analysis, and other services by using software and hardware, network, and other technical means. The intelligent translation classroom includes computers, HiTeach interactive learning system, low-focus projector, Haboard interactive whiteboard, physical reminders, HiTA intelligent teaching materials, and IRS instant feedback device learning software. The English intelligent translation teaching environment is shown in Figure 1. As shown in Figure 1, the HiTeach interactive teaching system can be installed on the teacher's computer to realize functions such as selection, synchronization, and response. Haboard interactive whiteboards are useful for education with touch-controlled teaching assistants. Physical reminders can display student learning outcomes, such as homework. HiTA intelligent teaching assistant makes it easy for teachers to take pictures, upload pictures, and synchronize courses. The IRS instant feedback system includes many remote controls for students. Recipients can use the IRS direct feedback device to answer multiple-choice questions to computers and screens. Teachers can receive information from students in a timely manner. To sum up, the research-based intelligent educational environment provides a variety of technical conditions for English listening and speaking teaching. It includes voice environment, differentiated interaction, and timely feedback [11]. The interactive mode of English translation listening and speaking teaching is shown in Figure 2.
The relevant research on English translation education is summarized in Figure 2. The importance of a multimedia environment is emphasized in English listening and speaking education, as is the importance of a standard environment for pronunciation, feedback, reaction, and differentiation. To improve students' listening and speaking abilities, English translation instruction should focus on the pronunciation environment, feedback, reflection, and differentiated evaluation [12]. Supporting and regulating students' pronunciation, providing feedback data, providing a basis for reflection and evaluation, and providing technical conditions for optimizing English listening and speaking education are all functions of the intelligent educational environment.
NN Machine Translation.
In recent years, self-attention networks [13] have attracted much attention due to their flexibility in parallel computing and modeling. Current neural machine translation models use stacked self-attention and fully connected layers throughout the model. The output matrix is calculated as Attention(Q, K, V) = softmax(QK^T / √d_k)V, where 1/√d_k is the scaling factor of the dot product; the self-attention mechanism and multi-head attention mechanism (MHAM) are shown in Figure 3.
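To make the attention computation above concrete, the following short NumPy sketch implements scaled dot-product attention for a single head; the array shapes and the function name are illustrative assumptions, not code from the paper.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q, K: (seq_len, d_k); V: (seq_len, d_v)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # dot products scaled by 1/sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability for softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the key positions
    return weights @ V                              # weighted sum of value vectors

In a multi-head layer this computation is simply repeated with separate learned projections of Q, K, and V, and the head outputs are concatenated.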
As shown in Figure 3, the Transformer model has three attention mechanisms: an encoder MHAM, a decoder masked MHAM, and an encoder-decoder MHAM [14]. The MHAM computes several attention heads in parallel over linearly projected queries, keys, and values and concatenates their outputs. From a probabilistic point of view, a given input sentence to be translated is the sequence X = {x_1, x_2, ⋯, x_|X|}. The goal of the Transformer is to generate the target translation Y according to the conditional probability defined by the NN, where Y_{<i} = {y_1, y_2, ⋯, y_{i−1}} consists of the first i−1 words of the sequence Y and |·| denotes the length of a sequence. The standard decoding algorithm adopted by the Transformer is beam search [15]: at each time step i, the translation probability is computed and the n best translation candidates are retained. In order to scale the similarity value into the [0, 1] interval, the edit distance is used to calculate the fuzzy matching (FM) coefficient between two sentences, where Levenshtein(s, t) represents the edit distance between strings s and t and |·| represents the number of elements. A larger fuzzy matching coefficient indicates a greater degree of similarity between two sentences, and the coefficient lies between 0 and 1 [16]. According to the FM value, the bilingual sentence pair with the highest degree of similarity can be selected from the translation memory. For vector-based similarity, ‖h‖ represents the norm of the vector h; the larger the EM value, the greater the semantic similarity between s and t [17]. In practice, two sentences may not be very similar in terms of basic units but may semantically describe the same thing. The semantic similarity calculation method can pick out these similar sentences, whereas the string-based method cannot; therefore, in practical applications, the similarity calculation method needs to be selected flexibly.
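A minimal sketch of the fuzzy matching coefficient described above, assuming the common translation-memory formulation FM(s, t) = 1 − Levenshtein(s, t) / max(|s|, |t|) over word sequences; the exact normalisation used in the paper is not legible in the extracted text, so this is an assumption.

def levenshtein(a, b):
    # one-row dynamic-programming edit distance between token lists a and b
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def fuzzy_match(s, t):
    s, t = s.split(), t.split()
    return 1.0 - levenshtein(s, t) / max(len(s), len(t), 1)

For example, fuzzy_match("open the file", "open the door") gives 1 − 1/3 ≈ 0.67.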
For the source language sentence X to be translated, the method first retrieves a set of source language sentences and corresponding target language translations from translation memory using an off-the-shelf search engine, obtaining the translation memory list {(X_m, Y_m) | m ∈ [1, M]}. Then, it calculates the similarity between X and each X_m. Second, translation fragments are collected from the translation memory list [18]. It collects translation fragments (n-grams accumulating up to 4-grams) from the retrieved target sentences Y_m as possible translation fragments G_x^m of X [19,20]. The translation fragments from translation memory are represented as G_x, where G_x^m represents all n-grams collected from <X_m, Y_m> (n accumulates up to 4).
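The fragment-collection step can be sketched as follows; the function name is illustrative and sentences are assumed to be whitespace-tokenised.

def collect_fragments(retrieved_targets, max_n=4):
    # gather all n-grams (n = 1..max_n) from every retrieved TM target sentence Y_m
    fragments = set()
    for y_m in retrieved_targets:
        tokens = y_m.split()
        for n in range(1, max_n + 1):
            for i in range(len(tokens) - n + 1):
                fragments.add(tuple(tokens[i:i + n]))
    return fragments

The union of these sets over all retrieved sentences corresponds to G_x in the text.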
Then, a weighted score is calculated for each segment u ∈ G_x. The weighted score of segment u is based on the similarity between the translation memory source sentence and the input source sentence; it measures the likelihood that the segment u belongs to the translation of the source sentence X. The larger the value, the more likely u is to be a correct translation fragment [21]. A translation segment reward is then calculated from this weighted score and added to the output layer of the NNMT model, where λ is obtained by tuning on the development set and σ(cond, val) is a conditional function analogous to the δ(·) function defined below. Finally, in the output layer of the NNMT model, the translation probabilities of the words in the vocabulary are updated. In summary, in the method of using translation fragments to guide NNMT decoding, the words contained in the translation fragments collected from translation memory receive an additional reward at decoding time.
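The reward mechanism might look roughly like the following sketch, which assumes the fragment score is the FM similarity of the translation-memory sentence the fragment came from and that λ scales the reward added to the decoder's output scores; the exact formulas are not legible in the extracted text, so the details here are illustrative.

def fragment_rewards(fragments_with_sim, lam=0.5):
    # fragments_with_sim: iterable of (fragment_tokens, sim) pairs, where sim is
    # the FM value of the TM sentence the fragment was collected from
    score = {}
    for frag, sim in fragments_with_sim:
        for word in frag:
            score[word] = max(score.get(word, 0.0), sim)
    return {w: lam * s for w, s in score.items()}

def rescore(log_probs, rewards):
    # log_probs: dict word -> model score at the current decoding step
    return {w: lp + rewards.get(w, 0.0) for w, lp in log_probs.items()}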
Position-Sensitive Translation Memory Fusion Methods.
To capture contextual information and long-range knowledge, a normal distribution is employed to represent the relationship between positions [22]. In this paper, the most similar translation memory instance <X_m, Y_m> is used to learn the word position distribution parameters at the sentence level. Specifically, for target word y_i and translation target position i during decoding, the sentence-level position score s_ps is calculated from i′, the position of the word y_i (with y_i ∈ Y_m) in Y_m, and sim(X, X_m), the fuzzy matching (FM) value of X with X_m. Then, the sentence-level position reward value is calculated, where δ(cond, val) is defined as follows: if cond is true, δ(·) takes the value val; otherwise, it takes the value 0. In this way, the NNMT model captures sentence-level positional information, allowing more contextual information about the translation segment to be obtained at each decoding step.
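A hedged sketch of such a sentence-level position score: a normal density centred on the position the word occupies in the retrieved TM target sentence, weighted by the FM similarity. The spread sigma and the exact weighting are assumptions for illustration only.

import math

def sentence_position_score(i, i_prime, sim, sigma=3.0):
    # i: current decoding position; i_prime: position of the word in Y_m;
    # sim: FM value between X and X_m; sigma: assumed spread of the normal distribution
    gauss = math.exp(-((i - i_prime) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
    return sim * gauss

Words whose TM position is close to the current decoding position thus receive a larger reward than words that matched far away in the retrieved sentence.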
The fragment-level position information helps the NNMT model to further capture local information. Analogous to the sentence-level position reward above, for a word y_i in a collected translation segment u, the reward value for its segment-level position n (0 ≤ n ≤ 3) is calculated using a simple standard normal distribution, and an additional fragment-level position reward value is then derived. In summary, at each decoding step i, the translation probabilities of the vocabulary in the output layer are updated, which increases the output probability of those words that match the expected position. Here, p_b represents the translation probability of word w_b, and the word-to-word transition probability p_ab (e.g., from w_a to w_b) is calculated from N(·), which denotes all cases satisfying (w_a, w_b) ∈ u. From these quantities, the reward value that word w should receive can be calculated. Second, for the position chain in the double-chain graph, the reward value is calculated at each decoding step according to the algorithm, and the updated reward value is then computed, where loc_{w_{i−n}} represents the position of the word w_{i−n} in the translation memory target sentence and t represents the current decoding time.
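The transition probabilities can be estimated by counting adjacent word pairs inside the collected fragments, roughly as below; the maximum-likelihood normalisation is an assumption, since the original formula is not legible in the extracted text.

from collections import Counter

def transition_probabilities(fragments):
    # count adjacent pairs (w_a, w_b) occurring inside collected fragments u
    pair_counts, left_counts = Counter(), Counter()
    for frag in fragments:
        for w_a, w_b in zip(frag, frag[1:]):
            pair_counts[(w_a, w_b)] += 1
            left_counts[w_a] += 1
    # p_ab = count(w_a followed by w_b) / count(w_a as a left word)
    return {pair: c / left_counts[pair[0]] for pair, c in pair_counts.items()}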
Design and Implementation of Translation Teaching System
The continuous development of NMT meets the needs of social progress. The only way to make NMT technology better serve human beings, provide convenience, and create value for people's lives is to put it into practice and achieve real-world deployment.
System Architecture.
In order to put the theoretical method proposed in this paper into practice, and to ensure that the system can be easily updated and maintained after the completion of the system, the system strictly complies with the requirements of functional modularity in the design stage. To achieve maximum decoupling between functions, each module is organized in a hierarchical order to facilitate functional collaboration between modules. The overall architecture of the translation teaching system is shown in Figure 4. As shown in Figure 4, the core service layer is in the middle. It is the core service logic of the entire machine translation system. The core service layer saves the most recent machine translation model as well as model-related configuration files like the trained word vector model and vocabulary and can use the model to generate translation results for the lower layer. The service layer can parse out the sentences to be translated and process requests from the interaction layer. At this stage, it also filters the request content. It will return a corresponding response for invalid requests, such as empty requests, or illegal requests, such as input in languages other than Chinese. The service layer will then schedule tasks in a reasonable manner based on the server's resources. It completes the request by implementing the content translation as quickly as possible.
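The request filtering described for the service layer might look like the following minimal sketch; the Chinese-character check and the error messages are illustrative assumptions, not the system's actual rules.

import re

def validate_request(text):
    # reject empty requests and requests in languages other than Chinese
    if not text or not text.strip():
        return False, "empty request"
    if not re.search(r"[\u4e00-\u9fff]", text):   # assumed check: at least one CJK character
        return False, "unsupported input language"
    return True, "ok"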
Functional Modules.
The overall architecture design of the system mainly provides conceptual guidance for the realization of the translation system. Functional modularity plays a crucial role in the specific implementation details of the translation system. This system divides the different levels of the system architecture into specific functional modules from the perspective of being convenient for users to use and for system administrators to maintain and upgrade the system. The clear division of functional modules makes the translation system more convenient in both the early development stage and the later maintenance stage. The basic principle of the task buffer queue and the first-come-first-served strategy in the task scheduling module is shown in Figure 5.
As shown in Figure 5, such a mechanism avoids the phenomenon of server crash caused by insufficient resources due to the excessively high number of simultaneous requests on the server side. Reasonably setting the size of the task buffer queue is beneficial to improve the resource utilization of the server and reduce the average request response time of the client. The cooperation logic between the modules is shown in Figure 6.
As shown in Figure 6, this module is responsible for uploading a copy of the latest model trained by the model training module to the core service layer for the translation service. The internal functional modules are clearly divided. It is also organized hierarchically for interaction and collaboration between modules.
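The task-buffering behaviour sketched in Figure 5 can be illustrated with a bounded FIFO queue; the queue size and the rejection message are illustrative choices rather than the system's actual configuration.

import queue

task_queue = queue.Queue(maxsize=64)    # assumed buffer size

def submit(request):
    try:
        task_queue.put_nowait(request)  # first come, first served
        return "accepted"
    except queue.Full:
        return "rejected: server busy"  # protects the server from overload

def worker(translate):
    while True:
        request = task_queue.get()
        translate(request)              # process requests in arrival order
        task_queue.task_done()

Capping the queue size is what prevents a burst of simultaneous requests from exhausting server resources, as described above.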
Due to the lack of supervised information, pseudo-training data produced by unsupervised NMT models suffer from a lot of noise and low-frequency translation errors, and these errors are continuously amplified and reinforced. To solve this problem, statistical machine translation is introduced as a posterior regularizer to denoise the pseudo-training data. This allows these errors to be eliminated in time to improve translation model performance. The initialization process of the translation teaching model is shown in Figure 7.
As shown in Figure 7, the whole training process is mainly divided into two stages: model initialization and using statistical machine translation as the posterior regularization. In the first stage, language pairs X and Y are given. It first builds a bidirectional initial statistical machine translation model using a language model trained on monolingual data and a translation table inferred from cross-lingual word vectors. Its statistical machine translation model will then be used for translation of monolingual data. In this way, pseudo-training data can be generated to initiate a bidirectional NMT model. In the second stage, statistical machine translation and NMT models are iteratively updated in a unified EM training framework. In this iterative process, the NMT model is trained not only with the pseudo data generated by the statistical machine translation model, but also with the pseudo data generated by the reverse NMT model translating the monolingual data.
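The two-stage procedure can be summarised as a pseudocode outline of the control flow; every function named here (init_smt, train_nmt, pseudo_pairs, fit_smt, denoise) is a placeholder standing for a component described in the text, and the number of rounds is an assumption.

def train_unsupervised(mono_x, mono_y, rounds=5):
    # Stage 1: initialise a bidirectional SMT model from language models plus a
    # translation table inferred from cross-lingual word vectors, then use its
    # translations of monolingual data to warm-start both NMT directions
    smt_xy, smt_yx = init_smt(mono_x, mono_y)
    nmt_xy = train_nmt(pseudo_pairs(smt_xy, mono_x))
    nmt_yx = train_nmt(pseudo_pairs(smt_yx, mono_y))
    # Stage 2: EM-style iteration in which SMT acts as a posterior regulariser,
    # filtering noise out of the NMT-generated pseudo-data
    for _ in range(rounds):
        smt_xy = fit_smt(denoise(pseudo_pairs(nmt_xy, mono_x)))
        smt_yx = fit_smt(denoise(pseudo_pairs(nmt_yx, mono_y)))
        nmt_xy = train_nmt(pseudo_pairs(smt_xy, mono_x) + pseudo_pairs(nmt_yx, mono_y))
        nmt_yx = train_nmt(pseudo_pairs(smt_yx, mono_y) + pseudo_pairs(nmt_xy, mono_x))
    return nmt_xy, nmt_yx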
Experimental Analysis of Intelligent Translation Teaching System
5.1. Transformer Model Parameter Settings. It applies reinforcement learning algorithms in an end-to-end neural network machine translation system based on the Transformer model. The parameter settings of the Transformer model are shown in Table 1. As shown in Table 1, the Adam optimizer is used, and the β coefficients are set to (0.9, 0.98). This experiment is trained using 3000 tokens as a batch. Each sentence has a maximum of 1024 words; words beyond this limit are discarded in the decoder and encoder. To preserve the temporal information of the sentence, the position is embedded into the input of the encoder.
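The optimiser settings above correspond roughly to the following PyTorch-style configuration; apart from the β values, the token-based batching, and the 1024-word limit taken from the text, every value here (model size, learning rate) is an illustrative assumption.

import torch

d_model, max_len = 512, 1024                      # assumed model size; stated length limit
model = torch.nn.Transformer(d_model=d_model)     # stand-in for the NMT model
optimizer = torch.optim.Adam(model.parameters(),
                             lr=1e-4,             # assumed learning rate
                             betas=(0.9, 0.98))   # β coefficients stated in Table 1
# Batches are assembled from roughly 3000 tokens rather than a fixed number of
# sentences, and sentences longer than max_len are truncated or dropped beforehand.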
Performance Comparison of Reinforcement Learning Algorithms.
This chapter introduces the idea of reinforcement learning and deep reinforcement learning algorithms into an end-to-end NNMT architecture. It builds ten end-to-end NNMT systems that apply reinforcement learning algorithms to the CNN and Transformer models, respectively. The performance of different reinforcement learning methods in the machine translation model is shown in Table 2.
Machine translation systems that use reinforcement learning outperform the baseline CNN and Transformer systems, as shown in Table 2 and Figure 8. It can be seen from Figure 8 that in the early stage of training, the machine translation model based on CNN reinforcement learning produces negative values when calculating the reward, causing a large deviation and making the overall convergence of the model worse. The model stabilizes over time. At the same time, during the experiment, the decrease of the model's loss function with the training batches was recorded. The curves for the Transformer model and the Transformer + part-of-speech information model are shown in Figure 9.
As shown in Figure 9, on the training set, the loss function of the Transformer model with the part-of-speech information vector decreases slightly faster than that of the Transformer model without the part-of-speech vector, and its final convergence is also slightly better. This shows that the training effect of the Transformer model after adding the part-of-speech vector is better than that of the original model.
Conclusions
Because AI has a significant impact on both modern education informatization and the balanced development of education in the information society, finding applications should begin with theoretical and practical education. In light of the current challenges, this paper is oriented to the field of NMT, focusing on two aspects: data sparseness and model improvement. This paper proposes a combined method to address the limitations of existing data augmentation methods. The results of the experiments show that the combined method can effectively increase the training corpus, thereby improving translation task performance. This paper also builds an NMT model with multi-granularity features and dynamic word vector embedding to improve model performance. According to ablation experiments, both multi-granularity feature input and dynamic word vector embedding can improve the performance of the translation model, and the combination of the two has the best effect. A limitation is that the number of training iterations and the size of the training data are insufficient. Due to hardware limitations, it was not possible to select training data from a very large corpus or to set large values for the number of training iterations, data dimensions, or other parameters. This may limit the improvement of the model's capability.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The author does not have any possible conflict of interest. | 2022-05-10T15:17:15.279Z | 2022-05-06T00:00:00.000 | {
"year": 2022,
"sha1": "ac9a9b1d755ee931161ac84cfb5b66834fc531d3",
"oa_license": null,
"oa_url": "https://downloads.hindawi.com/journals/misy/2022/7398929.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1aead61457969de681e2c6008bfb7a96a2eea05f",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
135395835 | pes2o/s2orc | v3-fos-license | Urbanization and the Resulting Peripheralization in Solo Raya, Indonesia
Dynamic urbanization in Solo Raya, a local term for Surakarta Metropolitan, amid rapid region-based urbanization in Indonesia, shows an unbalanced pattern of growth. A number of Surakarta City's peripheral areas have become newly growing, well-facilitated regions, while the formerly urbanized areas next to the city center show a declining process. Different socioeconomic development triggers a unique mosaic of socio-spatial patterns, on which the phenomenon of peripheralization can be investigated. Urban investment, boosted by the political will of both the national and local governments, has led to a shift in demographic conditions. A relatively massive in-migration has been attracted to the periphery and creates a new landscape of urban-rural society. The complex dynamics of metropolitan growth and the resulting peripheralization remind us that the socio-spatial pattern poses challenges for managing the rapid change of land use and space use. The pattern of urbanization, which differs across the surrounding areas of Surakarta City, is worth exploring. This paper discusses the conceptual framework of peripheral urbanization and the methodological approach; it is part of ongoing research on peripheralization in Solo Raya.
Introduction
Economic growth and investment have fueled the rapid development of small towns and surrounding rural areas. Changes in social structure and spatial patterns in the suburbs signal the emergence of new service centers [1]. The growth of urban areas has penetrated agricultural areas and changed the social and economic characteristics of rural areas [2][3][4]. In some parts, development is very fast, while other parts grow slowly. Meanwhile, the old city area that evolved earlier shows the dynamics of decline, characterized by a decrease in population. On the other hand, marginal areas show symptoms of population growth. Improving infrastructure quality and the completeness of facilities is another fact that marks developments in the periphery. Certain areas even show symptoms of very rapidly increasing service scale. Such regions grow into new growth centers and create a multi-core pattern on a regional scale. Urbanization on a regional scale is becoming a contemporary phenomenon that is flourishing in Asia. Fisher [5] and Ainsaar [6] studied the pattern of suburban development and built typologies to illustrate the differences in social and spatial dynamics.
Differences in the dynamics of development, which lead to differences in spatial growth and the level of investment, result in differences in the level of progress of a region. The gap between center and periphery begins to change shape. New development areas on the periphery tend to be better off, while some parts of the central region show the reverse. The term peripheralization is used to explain this phenomenon of regional inequality; the focus of study is more on social and spatial aspects [7]. Research on peripheralization, such as Lang's work [8] in some Eastern European countries, is often associated with a dichotomy between advanced regions that are considered to win the competition and backward regions that lose it. The political change from socialism to capitalism has triggered a major change not only in the social aspect but also in the spatial aspect. Physically, progress is more often associated with the pace of physical construction and socioeconomic facilities, while socially it is associated with the change of the economic structure from communism to liberalism. In subsequent developments, referring to Tahir and Naumann [9], the dependence of the periphery on the center gradually declines, and the population even tends to move from the center to the periphery. In this context, the dominance of the center over the periphery has shifted in the European countries. In the context of urbanization and metropolitan development in Indonesia, peripheralization occurs due to the rapid growth of rural areas. The availability of land becomes one of the considerations in the development of newly built areas on the outskirts of major cities. In some areas there is still the dominance of big cities over their new growth areas, but some dynamics show that the periphery is growing faster. Despite the decline in the agricultural sector's contribution, as explained by Hudalah [10], the rapid development driven largely by private investment has led to an increase in the role of small towns in the periphery. The urban service center began to shift from the big city to the periphery. Among the new growth areas, some have even developed beyond the center of the city, as happened in Sukoharjo, one of the neighboring districts of Surakarta. The dynamics of urbanization and the emergence of symptoms of peripheralization become an interesting issue to study. The symptoms illustrate how the role of small towns grows while, conversely, the dominance of big cities declines. The marginalization initially experienced by the periphery gradually shifted to the central region. The displacement of the central population has had an impact on the declining quality of services in the central region. Parts of the central region began to lag.
Observations on socio-economic changes and spatial areas can be linked to assess the adaptation and resilience of local communities in responding to urban development. What are the factors that influence people to stay in areas that have changed under the influence of urban growth and to merge into the urban socio-economic culture? And what makes local people shift into the interior and maintain their own socio-economic culture?
The Impact of Urbanization on The Development of The Periphery
The development of investment spurs the growth of urban space. The best-developed periphery areas grow first. Economic and demographic changes encourage shifts in social and spatial patterns. The entry of middle economic groups into rural areas has triggered social polarization. The formerly dominant peasant communities are marginalized and occupy deeper areas. Some of them remain among the new settlers and create social hybrids that have not fully blended. Fragmentation of space according to economic class becomes a new character of the peri-urban and sub-urban areas. Changes in the social, economic and spatial dimensions vary depending on the socioeconomic and cultural characteristics of the community and the development politics of local government. Agglomeration of metropolitan areas presents variations in structural change. The visible differences reflect the socioeconomic and spatial attractiveness of the region (see Norgard [11] and Ravetz et al. [12]).
The phenomena of peri-urbanization and sub-urbanization have been the focus of several studies. The shift of development to the periphery marks the contemporary urbanization that is spreading in third world countries. The suburbs are the main target of the city's middle-class migration. This phenomenon is called deconcentration by Ainsaar [6], namely the process of migration of the population to areas of lower density. Social and economic factors lie behind the migration of the population to the suburbs [13]. In the context of investment, the availability of a sufficient quantity of land at an economical price is a consideration for investors to penetrate the area. Some researchers focus on the classification of urbanization types (see Ainsaar [6] and Fisher [5]), and others focus on the migration process. Champion [14] observed the migration process and the concentration of population that occurred in the periphery; he named the process concentration diffusion. In his research on some peri-urban areas in several states of Australia, Fisher [5] divides the urbanization typology into four types, namely Sub-Urbanization, Counter Urbanization, Population Retention, and Centripetal Migration.
Sub-Urbanisation Dynamics
The term sub-urbanisation is used to name a new region whose proximity to and dependence on the central area is still high. This region generally has a high commuting level. Sub-urban areas are generally new settlements for the city's established groups. These groups move to the outskirts to get a better, more natural residential neighborhood while remaining within easy reach of the city. This area is generally developed by utilizing existing road access. In the Indonesian context, the tollgate entrance is a potential area for sub-urban development due to its ease of access. In a short time this area becomes packed with high-rise settlements. Groups living in the region are generally young professionals and young families looking for land that is still relatively cheaper than in the city but has a high level of ease of access. The tendency of suburbanization at the same time triggers land speculation issues. Areas along the main corridors are targeted by speculators seeking to control large amounts of land.
Counter Urbanization Dynamics
This dynamic is associated with a phenomenon in which migration to the periphery occurs in an area relatively far from the city. The linkage to, or dependence on, the city is also relatively low. Target areas are generally rural areas that are still dominated by green land with a moderate level of ease of access. Access is no longer a major factor; what matters instead are complete facilities and a rural atmosphere that offers a natural environment. The migrant groups that make this choice are generally established families or retirees. Their economic establishment encourages them to live in relatively beautiful places and opens the opportunity to own large enough plots of land. The countryside atmosphere is the main attraction because they are looking for a relatively quiet location quite far from the noise of the city. Some of the migrants in this area have deliberately started a new life by opening a business far from the city. They are generally those who have reached relatively high career levels in corporations. The existence of this upper middle class makes the region grow and opens up new job opportunities for local communities.
Population Retention Dynamics
This term is used to characterize areas that are still physically dominated by rural culture. Such an area is clearly located at a relatively far distance from the city. In terms of population, it is characterized by the presence of local people in a proportion roughly equal to or greater than that of the immigrant group. Local people choose to stay in this region due to adequate economic development. These economic developments were triggered by the presence of established urban migrants who opened businesses and offered opportunities for local people to work as administrative staff, factory workers, maids, security guards, hotel servants, and plantation workers, as well as in other informal sectors such as informal transport, tailoring, construction work, and others. In this region, the completeness of facilities is not the main requirement; it is precisely the rural atmosphere that is the main attraction. Tourism and agribusiness sectors usually develop in this region. The existence of businesses that generally rely on rural potential opens employment opportunities for local communities to increase incomes. In addition, vocational education is also evolving, especially where relevant to the emerging sectors. This indirectly encourages the awareness of rural communities to improve skills and education. This region is generally within a certain range of counter-urbanisation areas, as some areas prove that the presence of counter-urbanisation areas pushes the surrounding area into population retention.
Centripetal Migration Dynamics
This term is used to describe the dynamics of a region where people from the surrounding area are interested in migrating to it due to certain attractions. This interest, dominated by village communities, is generally driven by the condition of less fertile villages, limited economic potential, and inadequate social facilities. What attracts them to migrate is the development of medium- or large-scale industries, the construction of trade and service facilities, the development of tourism objects, and other developments that offer employment opportunities. This phenomenon is generally also a further process that is preceded by population retention. The development of a rural area due to the migration of established economic groups and investors will in turn open up new economic opportunities. The area becomes the target of migration for people living around it. Therefore, population retention in subsequent developments pushes the region into a new growth center that will evolve towards economic independence. This possibility is particularly the case in areas experiencing rapid growth due to large-scale investments, such as the development of industrial zones or special economic zones fueled by relatively cheap land prices [10]. In time, the region will develop into a new, self-contained city that will become the forerunner of subsequent sub-urbanization and counter-urbanization. Thus it can be observed that the process of urbanization is a continuous cycle that evolves from one place to another, in which agricultural activity is increasingly shifted and replaced by the urban economic sector, i.e., industry and services [15].
Peripheralization as a form of imbalance development
Peripheralization is not always associated with the peri-urban. This terminology is used to describe the condition in which an area is unequal compared to other areas in its vicinity. Peripheralization is not determined by distance or accessibility; rather, it indicates the difference in the degree of development of a region and the degree of backwardness of one region relative to another [16]. Other experts view it from a socio-political point of view, associating peripheralization with discrimination against certain groups or communities in a region [7,17]. The factors behind peripheralization are quite varied, and each expert has a different point of view.
Weck and Beisswenger [1] examine how migration triggers socio-economic changes that result in peripheralization. The economic dynamics of migration are generally the beginning of how a region develops. Economic opportunities will attract migration, and in subsequent developments, migration to a region will affect its development. Migration by the upper middle class will be followed by significant developments due to investment, and the construction of new facilities and infrastructure will occur in the region. Meanwhile, the abandoned areas, although located downtown, actually decline in quality. In this case, migration has a central role that determines both the development and the decline of a region. The condition under which these dynamics of development and decline occur in an area is called peripheralization. The peripheral region in this case is characterized by a declining population, especially of productive groups, and the stagnation of the construction of facilities or infrastructure. This phenomenon is one of the dimensions that characterize urbanization and urban growth. Michelini and Pintos [18] examine the dynamics of peripheralization occurring in some regions of Latin America. Their research proves that peripheralization occurs due to the seizure of access to land and facilities. Disadvantaged communities will suffer from a lack of access and incomplete facilities.
Based on his research on several countries in Eastern Europe, Lang [8] concluded that peripheralization is caused by several things, namely: concentration of facilities in certain areas that result in interest in populations from other regions that ultimately lead to setbacks in the abandoned areas; a decline in the rate of economic growth resulting in increased dependence of a declining region to a more developed region; development policies that prioritize metropolitan areas; development of infrastructure that is only concentrated in certain areas that trigger the gap. Broadly speaking, the results of Lang's study were confirmed by Kuhn [16] which explains that peripheralization is a form of regional disparity due to economic polarization, different strata of social development and weak political power and the bargaining position of government or community that result in the decline of a region.
Research Methods
The study methodology covers three things: the research approach, data collection methods, and data analysis methods. In general, the approach applied in this research is a phenomenological approach. This approach is chosen because it is considered the most appropriate for understanding the phenomenon of urbanization and observing the process of peripheralization. With a phenomenological approach, a detailed and deep understanding is expected of the patterns of urbanization that occur and the dynamics that make up peripheralization. The objective of the research is to observe the urbanization pattern in Metropolitan Surakarta and the peripheralization due to the development of the urban area. A survey method will be conducted to collect secondary and primary data. Some informants will be selected by snowball sampling to complete the primary data. The collected secondary and primary data will be analyzed both quantitatively and qualitatively. Population data will be observed for the period 1980 to 2016. This is intended to capture the dynamics that occur, from the beginning of development to the present.
The research area covers the Solo Raya area, namely the urban agglomeration area of Surakarta City. Geographically, the city of Surakarta as a metropolitan center is surrounded by the six districts of Sukoharjo, Boyolali, Karanganyar, Wonogiri, Sragen, and Klaten. Among the six districts, agglomeration of urban areas due to the development of Metropolitan Surakarta occurs only in Sukoharjo, Karanganyar and Boyolali districts. Thus, observation of the urbanization pattern will be done in these three districts. The Fisher typology described above will be used as a reference to see the patterns of urbanization that occur. From these three districts, sub-districts (kecamatan) that show the impact of urbanization will be selected. Each sub-district will be examined from several aspects, namely social, economic and spatial changes. The socio-economic transformation will be tested for its impact on changes in the spatial structure. More specifically, the changes to be observed are the level of urban intensity that occurs. Rural socioeconomic changes leading towards urban character will be observed to the extent that they alter the spatial order. The socioeconomic characteristics of the community, both local communities and migrants in each new urban area, are analyzed to assess whether there is a relationship between socio-economic characteristics and the characteristics of space. To understand the contrast, regions with a high degree of change from village to city will be compared with regions with a low degree of change.
Based on research conducted by Nuriasari [19], the urban influence of Surakarta can be observed up to a radius of 8 km from the city. In this study the distance will be re-tested to ascertain whether the urban influence is still the same or has widened to areas further away from the city of Surakarta. The observed urban influences, as described earlier, include aspects of population, land use change, and socio-economic structure change. In the three selected districts, it will be seen which represents the highest and lowest intensity of urban influence. This is done to test whether the effect of urbanization occurs evenly or not. If it is uneven, it is possible that one kabupaten (district) will show a high intensity of change while the other districts do not. This dynamic will provide a more detailed explanation of how the effects of urbanization are distributed and what factors lie behind the different levels of urbanization intensity. Sub-districts to be observed in this study include (see Figure 1) Baki, Gatak, Kartasura, Grogol, and Mojolaban (Sukoharjo District); Colomadu, Gondangrejo, and Jaten (Karanganyar District); and Ngemplak (Boyolali District). Data analysis in this research will use a descriptive analysis method. The assessment indicators for each aspect, derived from the literature review, will be assessed and measured. Data to be analyzed include (i) land use data, (ii) population migration, (iii) demographic data, (iv) economic data, and (v) data on the availability of basic facilities. Spatial analysis to see the dynamics of land use will use data from satellite imagery. The data are then analyzed by the overlay method with the help of a Geographic Information System. The spatial analysis will use series data in the form of land use maps from 1980 to 2016, which are expected to show the stages of spatial development.
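The overlay analysis of the land-use series could be sketched along the following lines, assuming the maps have been rasterised to a common grid of integer land-use classes; the library choice and file names are illustrative only.

import numpy as np
import rasterio

def land_use_change(path_1980, path_2016):
    # read two co-registered land-use rasters and cross-tabulate class transitions
    with rasterio.open(path_1980) as a, rasterio.open(path_2016) as b:
        lu_a, lu_b = a.read(1), b.read(1)
    classes = np.union1d(np.unique(lu_a), np.unique(lu_b))
    return {(int(i), int(j)): int(np.sum((lu_a == i) & (lu_b == j)))
            for i in classes for j in classes}

Each (i, j) entry counts the pixels that changed from class i in 1980 to class j in 2016, for example from agricultural to built-up land.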
Meanwhile, the analysis of peripheralization will also use data relating to economic, social and infrastructure aspects and public facilities. A scoring method will be used for data processing. The size of the score indicates how high or low the inequality is. In the scoring analysis, the value to be given for each indicator is as follows.
Conclusion
According to previous research, peripheral development has become the contemporary form of urbanization. The different patterns of development in the peripheral regions of the city express the particular characteristics in which the rural-urban transformation can be investigated. The distinct pattern of spatial change shows how economic forces have driven development and how the interplaying roles of development stakeholders have been negotiated. The trade-off between public and private interests implies different impacts on the physical dimension. A number of studies find that social and cultural aspects still act as determinant drivers. In addition, the characteristics of the urban society moving into the peripheral regions will create different kinds of social integration. In this matter, we can see that to some extent the change brings a betterment of living conditions, but in other situations the change brings unbalanced growth. The expansion of urban infrastructure triggers peripheralization, in which the former neighborhoods sometimes decline while the new neighborhoods tend to predominate.
The dynamics of rural-urban integration in the surrounding area of a primate city show four different patterns of change that represent different levels of urbanization. The phenomenon of suburbanization shows how the regions near the city have been strongly transformed into new urban areas. Meanwhile, counter urbanization explains how the emergence of new rural-urban regions represents areas that are less dependent on the city centre. These areas become new destinations of migration in which the inhabitants, especially former urban dwellers, have decided to engage with new livelihoods. Over the respective period, these regions continuously develop their urban characteristics.
The gap between urban and rural, the so-called peripheralization, which is strongly expressed in suburban areas, becomes less intensive in these regions. Another pattern is called population retention, characterized by resilient rural areas that experience new economic opportunities due to the influx of the urban middle class into these sites. Farmer households can still be seen cultivating their land while family members engage in new urban jobs; this kind of additional income has increased their living standard. Last but not least is centripetal migration. This type of urbanization happens when newly growing urban areas attract the surrounding rural people to come, mostly in search of employment. In this region, we can also see the gap between affluent people who live in a better environment and those who have just arrived and mostly live in relatively substandard areas. It is common that those areas are located in the less-facilitated periphery. In this matter, the phenomenon of peripheralization can also be examined. | 2019-04-27T13:09:12.676Z | 2018-02-01T00:00:00.000 | {
"year": 2018,
"sha1": "0ad79e7af0e26aaa620e812ca98bb305b5548904",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1755-1315/123/1/012047",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "deb20640d3e460acae16e55629291d39f2da8796",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Physics",
"Geography"
]
} |
81028252 | pes2o/s2orc | v3-fos-license | Advances in the treatment of substance use disorder in Cyprus
Addiction psychiatry is a relatively new field in Cyprus. This paper presents the advances in the treatment of substance use disorders in the country in the past three decades. These advances have included increased availability of services, increased accessibility, the development of a modern biopsychosocial harm-reduction approach and evidence-based pharmacological treatments.
Cyprus is a member state of the European Union (EU) with a population of 855 000 inhabitants, comparably high living standards and a low unemployment rate. Despite its relative wealth, mental healthcare expenditure is considered low compared with the average in the EU. People with substance use disorders in Cyprus usually seek treatment in the public sector but can also be treated privately. The country has 11 psychiatrists per 100 000 people, compared with an average of 16 per 100 000 in the EU (European Commission, 2017). There are currently only two psychiatrists working full time in the department of addiction psychiatry in the public sector.
Heroin and other opioids
It is estimated there are up to 1200 high-risk opioid users in Cyprus. In 2016, 253 of them were undergoing opioid substitution treatment (OST). The average number of drug-induced deaths in Cyprus is eight per year, of which 90% are males who use opioids intravenously (European Monitoring Centre for Drugs and Drug Addiction, 2017). Causes of death include both illegal opioids and legal opioids diverted from medical to recreational use, as well as legally prescribed opioids for medical use.
Until 2007, there was no OST available in Cyprus. The only available treatment was within abstinence-based therapeutic communities. In August 2007 the first OST unit (named 'Gefyra' meaning 'The Bridge') opened in Nicosia, the capital city of Cyprus. The unit started with 13 patients in 2007, increased to 32 in 2011 and to 84 patients in 2017. A low-threshold approach was implemented in this harm-reduction intervention in recent years, especially after 2011, because an increasing body of evidence suggested there were generally better treatment outcomes for low treatment-threshold compared with high treatment-threshold designs (Kourounis et al, 2016). The OST program aimed to improve accessibility to treatment and to offer personalised treatment options regarding medication choice and dose titration, as well as flexibility of treatment duration. There is an emphasis on maintenance, harm reduction and retention of low-adherence patients.
In recent years the number of OST units in the public sector has increased from one to five (one in each of the main cities of Cyprus). In addition, an OST service is now available in the prison setting. The main medication that is used in all OST programs is a combination of buprenorphine plus naloxone for short-term as well as for long-term/maintenance treatment, following established guidelines (Taylor et al, 2015). Methadone is only used for short-term in-patient use until the patient is cross-titrated to a buprenorphine-naloxone combination, which increases safety. In the private sector, oxycodone and dihydrocodeine have also been used for OST.
Cannabis
In Cyprus, cannabis for personal recreational use is illegal. Medical cannabis is strictly controlled by the Ministry of Health and only a few people are approved each year to receive it, mainly in the form of cannabis oil. Currently, there is a draft law under discussion for the regulation of medical cannabis; it includes a provision for medical use in specific circumstances (House of Representatives, 2018). The prevalence of recreational cannabis use is stable, involving less than 5% of the population. Nevertheless, within that sub-population there has recently been a decrease in the use of herbal cannabis and an increase in synthetic cannabinoids.
Cannabis-induced psychoses are treated in out-patient psychiatric clinics in the public and private sectors. When accompanied by severe behavioural disturbance, patients are treated in a psychiatric hospital, usually under court-ordered, temporary, obligatory admission to hospital. Cannabis users who commence treatment on an out-patient voluntary basis are usually offered only counselling, with the addition of psychiatric care if they present with psychotic symptoms or with other psychiatric comorbidity.
Cocaine and other stimulants
In 2016, cocaine, 3,4-methylenedioxymethamphetamine (MDMA) and amphetamines were used only by 0.4, 0.3 and 0.1% of the population aged 15-34 years, respectively. There are no available data regarding stimulants classified as
The prevalence of both cocaine and amphetamine misuse in Cyprus peaked in 2009. There has been a consistent decrease each year since then. MDMA is currently rarely taken and its use has steadily decreased, at least since 2006. Despite this, a recent multi-city wastewater study found Limassol to be among the top ten European cities for crystal methamphetamine use, which is a relatively new drug in Cyprus (European Monitoring Centre for Drugs and Drug Addiction, 2018).
There is no dedicated program for the treatment of stimulant use disorder, but detoxification, rehabilitation and relapse prevention are offered in the general addiction psychiatry setting. Most of these patients also present with mental health comorbidities that are treated in an integrative care setting. The young stimulant misusers are referred to a special counselling centre for adolescents in the public sector.
Alcohol
The first drug and alcohol detoxification and rehabilitation centre, named 'THEMEA', was established in the public sector in 1991 as a division of the Psychiatric Clinic of the General Hospital of Nicosia. Originally, that unit only had four beds and admitted 'alcoholics' as well as people with illegal substance use disorder, who were known at that time as 'drug addicts'.
Alcohol is a very popular legal recreational substance in Cyprus, with the prevalence of alcohol use disorder just below the European average, at around 3%. There is an increasing prevalence of alcohol use among adolescents in Cyprus. A recent multi-centre study of 15-to 16-year-old school students found that almost all of them had taken recreational alcohol at some time and 70% reported alcohol use within the past 30 days, compared with a European average that is less than 50% at that age (European Monitoring Centre for Drugs and Drug Addiction, 2015).
At present, THEMEA is still the only drug and alcohol detoxification and rehabilitation centre in the public sector in Cyprus. It has been developed into a university clinic that offers full in-patient and out-patient detoxification, rehabilitation and relapse prevention services under the care of an experienced multidisciplinary team including addiction psychiatrists, clinical and counselling psychologists, specialist nurses, an occupational therapist and a social worker. Medical students of the three medical schools of Cyprus as well as residents in psychiatry are trained in this clinic during their rotation in the field of addiction psychiatry. The therapeutic program includes devising an individualised treatment plan. A biopsychosocial approach is used, which is divided into a short-term in-patient phase and a longer-term out-patient phase of treatment and relapse prevention. The treatment consists of a combination of pharmacological and non-pharmacological interventions, according to established guidelines (NICE, 2011). Non-pharmacological interventions include counselling, cognitive-behavioural therapy, mindfulness and behavioural techniques, which take place in an individual and/or group context. Since 2011 the therapeutic program is not entirely abstinence oriented; decreased/controlled alcohol consumption may now be a treatment goal for some patients, in line with published evidence (Heather et al, 2010). Medications used in the treatment of alcohol and substance use disorders include naltrexone, nalmefene, disulfiram and baclofen.
Most patients have comorbidities, especially other substance use disorders, mood disorders or personality disorders, which are usually treated by the same multidisciplinary team in an integrative care setting (Prodromou et al, 2014).
Tobacco
Smoking cigarettes is very common in Cyprus, with 25% of the population (37% of men and 14% of women) aged 15 and above being daily smokers. To help those tobacco-dependent people who want to stop smoking, public mental health services currently offer a structured smoking cessation program consisting of nicotine replacement treatment with patches along with a 3-month counselling intervention (Ministry of Health, 2018).
Discussion
Cyprus' psychiatric community follows scientific evidence and international medical guidelines in the treatment of mental health disorders. Substance use disorders are considered to be chronic remitting and relapsing mental health disorders, as classified in DSM-5 (2013) and the forthcoming ICD-11. Treatment of legal or illegal substance use disorders is offered on a voluntary basis, in the public sector or privately. Despite this, drug possession for personal use is regarded by the law as a serious criminal offence. It is punishable by up to 12 years in prison for class-A drugs (opioids and cocaine). Recently, new legislation allows young drug users who are arrested for the first time to opt for treatment instead of prosecution and imprisonment.
As substance use disorders are usually chronic conditions, patients need long-term care using an evidence-based multidisciplinary biopsychosocial approach. The health services in Cyprus are changing to implement recent laws regarding the economic and administrative autonomy of public hospitals and an emerging general health system. Modern health policies, integrated with medical research achievements and clinical guidelines, will play an important role in advancing further the treatment of substance use disorder. | 2018-12-24T12:52:37.555Z | 2018-06-18T00:00:00.000 | {
"year": 2018,
"sha1": "d8f571945b7acc2917c3f1c24ffc2c429770b604",
"oa_license": "CCBYNCND",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/41B3042C2C0BE1B8A923314C873654CB/S2056474018000168a.pdf/div-class-title-advances-in-the-treatment-of-substance-use-disorder-in-cyprus-div.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d8f571945b7acc2917c3f1c24ffc2c429770b604",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254525416 | pes2o/s2orc | v3-fos-license | Primary care physicians’ perceptions of social determinants of health recommendations: a qualitative study
Background Several organisations have called for primary care professionals to address social determinants of health (SDoH) in clinical settings. For primary care physicians to fulfill their community health responsibilities, the implications of the SDoH recommendations need to be clarified. Aim To describe primary care physicians’ views about being asked to address SDoH in clinical settings, from both positive and negative perspectives. Design & setting A qualitative study in Japan. Twenty-one physicians were purposively recruited. Method ‘Love and breakup letter’ methodology was used to collect qualitative data that contained both positive and negative feelings. Participants wrote love and breakup letters about being asked to address SDoH in a clinical setting, then undertook an in-depth online interview. Data were analysed via thematic analysis using the framework approach. Results The following themes were identified: (i) primary care physicians take pride in being expected to address SDoH; (ii) primary care physicians rely on the recommendations as a partner, even in difficult situations; (iii) primary care physicians consider the recommendations to be bothersome, with unreasonable demands and challenges, especially when supportive surroundings are lacking; and (iv) primary care physicians reconstruct the recommendations on the basis of their experience. Conclusion Primary care physicians felt both sympathy and antipathy towards recommendations asking them to address SDoH in their clinical practice. The recommendations were not followed literally, instead contributing to physicians’ clinical mindlines. Professional organisations that plan to develop and publish recommendations about SDoH should consider how their recommendations might be perceived by their target audience.
Introduction
SDoH are non-medical factors that influence health outcomes. 1 It is estimated that more than half of health outcomes are determined by socioeconomic or behavioural factors. 2 Primary care describes itself as the foundation of the healthcare system, 3 and should play a role in addressing health inequity by recognising patients' socioeconomic background, identifying marginalised populations, and delivering high quality preventive care and chronic disease management. 4 Considering the reality that the distribution of primary care resources is unequal, [5][6][7][8][9][10][11] addressing patients' social conditions in primary care settings should be prioritised as an urgent issue.
Healthcare professionals' attitudes and beliefs regarding patients contribute to inequalities in health care. 11 Social justice is one of the moral foundations of primary care, 12 and interventions in SDoH in a clinical setting are theoretically considered to contribute to better health outcomes, better healthcare delivery, and cost savings. 3,13 Consequently, several organisations have called for medical professionals, including primary care professionals, to address SDoH in the medical healthcare system. [14][15][16][17][18][19] However, primary care physicians may experience confusion in assuming responsibility for their patients' social determinants, for several reasons. 20 First, dealing with patients with social difficulties can be stressful and depressing. [21][22][23] Second, newly detected patient social needs could lead to excessive medicalisation and impose additional work on busy professionals. 24,25 Third, dealing with SDoH is rarely associated with financial rewards for primary care professionals, 24 except for some innovative approaches. 26,27 Fourth, there is little current evidence that clinicians can play effective roles in SDoH in primary care practice, 28,29 particularly in small practice settings. 21 In addition, comprehensive evidence-based recommendations about how to address SDoH have not yet been published. 30 Given the complexity of this situation, primary care physicians sometimes lack the confidence to address social needs, and are afraid of contributing to poor health outcomes. [31][32][33] Involvement in SDoH in the absence of solid and effective methods may raise the fear of unforeseen problems. 34 Given these problems regarding SDoH in primary care settings, expecting primary care physicians to deal with SDoH may lead to further confusion and exhaustion, which may hinder the provision of high quality primary care to patients.
It remains unclear how primary care physicians feel about being required to take action on SDoH. To ensure that SDoH-related recommendations encourage primary care physicians to fulfil their role in relation to health inequity, the implications of the recommendations need to be clarified. The study aims to describe the perspectives of primary care physicians when asked to address SDoH.
Method
Setting
This qualitative study was conducted in Japan, and followed the Standards for Reporting Qualitative Research (SRQR). 35 Recruitment, interviews, and discussion in the data analysis in this study were conducted online because of the COVID-19 pandemic.
In Japan, primary care is delivered both in the community and in hospital settings. Primary and secondary care are not always distinguished clearly. 36 The distinction between family physicians, hospital family physicians, and hospitalists (engaging in both inpatient and outpatient care) is not clear in Japan, and they are sometimes collectively referred to as general medicine physicians. 37 Most of them engage in primary care, and many subspecialists also play a role in primary care. 36 The Japan Primary Care Association encourages each member to take action on daily practice including (i) prevention, (ii) education, (iii) research, (iv) partnership, and (v) advocacy, to eliminate unjust health inequities. 19 As of 2022, this is the only recommendation published by official medical professional organisations in Japan.
Reflexivity
The first author is a primary care physician and PhD student majoring in medical education. The first author is a member of the Japan Primary Care Association Commission on Social Determinants of Health, which published a recommendation about SDoH. 19 The second author is a primary care physician and researcher in medical education. The third, fourth, and fifth authors are experts in medical education.
As the researchers' standpoint, a social constructivist epistemology was adopted. Constructivists recognise that 'individuals construct different understandings based on their past experiences and knowledge'. 38 Social constructivism is a theory that learning is structured by the dynamic interaction between individuals and the environment, including other people, objects, and activities that occur there. This dynamism is seen in learners' participation in the actual practice, especially when they are faced with conflicting ideas. 38,39 The theory holds that knowledge is a construction of the individual, and the learner participates in the learning process in an active way. 39 The authors of the current study believe that participating primary care physicians construct their own understandings of SDoH recommendations based on their experiences and clinical settings.
Participants
Participants were recruited purposively to maintain diversity in years of experience, self-reported gender, and practice setting. Recruitment included direct request from researchers, notices in social network services, and recommendations from participants. All participants were primary care physicians, general medicine physicians, or residents in primary care. All participants were familiar with the concept of SDoH. Physicians that were involved in producing official recommendations regarding SDoH were excluded. Considering previous studies, an initial goal of recruiting 20 participants was set. 40,41
Data collection
To obtain multifaceted insights into participants' ideas and feelings, love and breakup letter methodology was used. 42 In this methodology, participants are asked to write love and breakup letters to an item or topic under discussion. These letters are used as triggers for subsequent interviews. The authors were concerned that opinions about being asked to address SDoH would be biased towards favourable responses because addressing SDoH is generally viewed as politically and ethically correct for primary care physicians. The love and breakup methodology has the potential to reveal both positive and negative feelings towards a topic. 43 This methodology emerged from research on user experience, 42 and was used to stimulate various thoughts and ideas that primary care physicians have when receiving recommendations about SDoH.
Data collection was performed according to the following four steps. First, participants voluntarily submitted written consent for research participation and completed a demographic data form. Second, participants read two recommendations 15,19 that included recommendations for primary care physicians and family physicians to address SDoH in their daily practice. These two recommendations are the only SDoH-related recommendations published by the representative associations of primary care or family medicine, and were written in or officially translated into Japanese. Third, participants were given an explanation about the love and breakup letter method, and asked to write letters to a person who officially asks primary care physicians to address SDoH in their daily practice. To encourage participants to express their ideas and feelings freely, no further requirements were given about content, length, or wording. Some participants reported that it took about an hour or less to write these letters, and others reported that they 'racked their brains' for a few days. Fourth, participants were interviewed about their letters by the first author. The interviewer read these letters carefully before each interview, and asked participants about the meaning of their letters in detail and their feelings and thoughts in writing the letters. Table 1 shows examples of the love and breakup letters.
Data analysis
Every interview was recorded and transcribed verbatim. Anonymised transcripts and letters were analysed via thematic analysis using the framework approach. 44 The analysis contained the following seven steps: verbatim transcription; familiarisation with the whole interview; initial coding; developing a working analytical framework; applying the framework to the whole data again; summarising data into the framework; and interpreting the data. Data analysis was conducted partly in parallel with data collection, and participant recruitment was completed after confirming that no additional theme emerged. 45 The first and second authors coded the data, discussed it iteratively, and collapsed their analyses through the whole procedure. The other authors examined the analysis and revised the coding. All authors discussed the results iteratively and reached a consensus. Finally, all participants read the analysis and revised it if necessary.
Results
A total of 21 participants were recruited, of which 38% were self-reported women. The median age was 40 years (range: 28-55 years) and the median duration of clinical experience was 10 years (range: 3-31 years). Table 2 shows demographic data.
The following four themes were identified from the qualitative analysis: (i) primary care physicians take pride in being expected to address SDoH; (ii) primary care physicians rely on the recommendations as a partner even in difficult situations; (iii) primary care physicians consider the recommendations to be bothersome, with unreasonable demands and challenges, especially when supportive surroundings are lacking; (iv) primary care physicians reconstruct the recommendations on the basis of their experience. Table 3 shows a summary of these themes and sub-themes.
Primary care physicians take pride in being expected to address SDoH
Participants believed that they were in a unique position to address SDoH and they were proud to be relied on. They also recognised that addressing SDoH would enhance the quality of their practices.
Integrability with primary care
Participants reported that addressing SDoH was an essential component of primary care, and that it was a matter of course to be asked: Participants considered primary care physicians as being in the best position to address SDoH because of their accessibility and comprehensiveness: 'Primary care physicians have more opportunities to encounter patients with social and financial difficulties than subspecialists. Needless to say, we should be professional when seeing these patients.' (3 years; male; resident; love letter)
Excellence in primary care
Participants believed that addressing SDoH allowed them to manage complex cases more robustly, and was thus a part of being an excellent primary care physician: Experienced physicians perceived the recommendations as enhancing the value of their practice. Novices expressed admiration for the recommendations, and perceived them as being 'cool':
Primary care physicians rely on the recommendations as a partner even in difficult situations
Participants favoured SDoH recommendations from the following two perspectives: authorities to validate their practices; and strongholds in times of hardship.
Authoritative supporter
Participants were grateful that the recommendations guaranteed the legitimacy of their commitment to SDoH in daily practice: 'Whatever others say, you make me feel confident.' (9 years; male; academic hospital; love letter)
'I often wondered if addressing SDoH was just meddling. It [the recommendation] tells me that addressing SDoH is a meaningful initiative for patients and communities, in an evidence-based manner. Thus, I feel more confident.' (15 years; male; community hospital; interview)
Participants also perceived that the recommendations verbalised and acknowledged the frustration and hesitation that they felt in their workplace:
Encouraging friend
The participants thought of the recommendations as a friend who pushed them to do the right thing regarding SDoH, even in the hardest of times: 'There are many things I cannot do on my own regarding SDoH, but you keep me motivated.' (5 years; male; resident; love letter)
Primary care physicians consider the recommendations to be bothersome, with unreasonable challenges and demands, especially when supportive surroundings are lacking
Participants disfavoured SDoH recommendations as a nuisance that imposed excessive burdens, especially in unsupportive practice surroundings. They reported the following three negative consequences: disregarding the importance of SDoH; feeling guilty; and underestimating their skills.
Excessive burden
Participants felt that the recommendations asked too much and that they would be overwhelmed by time-consuming and emotionally draining burdens. The wide-ranging scope of the recommendations contributed to the sense of being overwhelmed: 'If I did everything you said, the work would never get done.' (
Antipathy driven by unsupportive surroundings
Participants found the recommendations more bothersome when they confronted environments that were not suitable for the recommendations to be implemented: In contrast, if participants worked in a cooperative environment and they had peers or colleagues to address SDoH with, they were more likely to perceive the recommendations positively, even in the midst of busy clinical practice:
'It matters whether my colleagues are looking in the same direction and consider SDoH to be important. Without anyone else who acknowledges the importance of SDoH, I would feel very lonely, and I feel negative about these recommendations. […]
On the contrary, with colleagues who share the same vision regarding SDoH, I would feel very positive, even if the statements recommend something I can't do right now.' (18 years; male; academic hospital; interview)
Primary care physicians reconstruct the recommendations on the basis of their experience
Participants did not feel that they should follow everything the SDoH recommendations said. Rather, they considered the recommendations as a trigger for multi-layered learning and practice.
Not following the recommendations literally
Participants recognised that reading the recommendations alone was not enough to change their practices. None of the participants reported that they followed the recommendations literally: Participants reported that realising the importance of SDoH in the real world was also important for an effective education: 'Educational opportunities should be provided to increase physicians' understanding that a lot of patients have socially complex backgrounds. Many physicians are still unaware that they do see such patients.' (10 years; female; academic hospital; interview)
Discussion
Summary
The study explored primary care physicians' views regarding recommendations that asked them to address SDoH in their clinical practices. The love and breakup letter methodology revealed ambiguous feelings and thoughts about the statements. Participants were proud of themselves as professionals to be asked to address SDoH and considered the recommendations to be helpful and supportive. Conversely, participants also thought of the recommendations as irritating and nagging, especially in the absence of peers with shared views regarding the importance of SDoH. Participants did not follow the recommendations literally, and they required reflective learning and practice to understand and educate themselves regarding SDoH in their clinical settings.
Strengths and limitations
To the best of the authors' knowledge, this study is the first to examine how primary care physicians view recommendations about SDoH. These recommendations aim to reduce health inequity by changing the attitudes and behaviours of primary care professionals. The way in which such recommendations are received by primary care physicians is thus a matter of great concern. This study gathered negative as well as positive opinions. This methodology was not designed to dismiss recommendations or to cynically criticise efforts to address SDoH. Rather, the study revealed how the recommendations could be incorporated into education and practice regarding SDoH in clinical settings.
This study involved several limitations. Importantly, one participant reported severe distress when writing the breakup letter. The participant worked with a socially marginalised population and perceived writing a breakup letter as denying their own dedication. The participant could not fully express their ideas in the letter. Previous studies mentioned that some participants are uncomfortable and embarrassed to write and read their letters. In addition, researchers must be aware that the love and breakup letter methodology sometimes induces invasive emotional responses in participants. Instead of love and breakup letters, researchers can use 'fan' and 'admonition' letters, thereby maintaining the benefits of the methodology while avoiding unnecessary emotional disturbance.
In addition, the physicians who voluntarily participated in this research might have been those who had an interest in and a positive attitude towards SDoH. In particular, participating residents, who represented one-third of all participants, might have a high affinity for SDoH because they were all under the Japan Primary Care Association family medicine expert training programme, which requires residents to address SDoH and submit a report. Thus, the findings might not reflect the opinions of primary care physicians who have little interest in SDoH or disagree with the commitment to addressing SDoH. However, this limitation was partially resolved by collecting negative views on the topic via the breakup letter.
The context of primary care in Japan should be mentioned to contextualise the findings. Although Japan has well-organised healthcare and social security systems, socioeconomic and health inequities still exist. 36,46 In addition, physicians in Japan, especially residents, are chronically exposed to long working hours. 47 Clinics and small-sized community hospitals in Japan are reimbursed under a fee-for-service model. 36 This implies that most SDoH-related clinical practice is not financially rewarded.
Comparison with existing literature
Although many physicians believe in the importance of working to address patients' social needs, few physicians are able to incorporate this approach into their practice. 32 Primary care physicians recognised that the major disincentives to working on SDoH were a lack of time, staffing, and resources. 32,48,49 These disincentives can promote commoditisation, commercialisation, and fragmentation of primary care, leading to inequalities in health care. 50 In the current study, these difficulties were associated with negative opinions of the recommendations.
This study also indicated that, even if physicians felt burdened, supportive work conditions and cooperative team members were related to positive attitudes towards recommendations about SDoH. This relationship may have occurred because sharing tasks and responsibilities with team members mitigated participants' fears about lacking skills and resources, and helped them feel able to address SDoH in clinical settings. 26 The ability to respond appropriately to patient social needs may thus reduce these mental stresses and improve self-efficacy. 51 In addition, SDoH recommendations gave participants, especially younger ones, a sense of honour and dignity as primary care physicians. Primary care physicians tend to be unduly undervalued in terms of their skills and roles. 52 In Japan, there had been no official primary care training until recently, 53 and unreasonable criticism from specialists may often reduce motivation to be a primary care physician. 54 This context may partly explain why younger participants focused on their identity formation.
Being aware of unmet social needs in clinical practice might lead to further understanding of SDoH. Physicians can go beyond power inequalities between patients and physicians and bring patients' social contexts into everyday encounters. 55,56 Primary care physicians working in areas of lower socioeconomic conditions have more positive attitudes regarding their patients' social problems. 48 Physicians' attitudes towards patients with social difficulties may be improved through changes in medical education, 57 and reflective learning and practice about SDoH may play an important role in residents' development. 58 Participants did not literally implement the recommendations. Instead, they regarded the recommendations as encouraging and supportive, with positive implications for their clinical practice and further advancement of their existing efforts related to SDoH. The recommendations may not function as a norm to follow, but rather to support each physician to form their own clinical mindlines, or 'internalised and collectively reinforced tacit guidelines'. 59,60 Clinical mindlines are formed on the basis of various learning sources, reflection, and interactions with peers and colleagues, and this was also indicated in the current research.
Implications for research and practice
Professional organisations that plan to develop and publish recommendations about SDoH should consider how their recommendations might be perceived by their target audience. By providing an opportunity to learn about and discuss SDoH, their recommendations could help to change clinical practices more efficiently. For clinical supervisors, the current findings might provide useful tips about teaching SDoH. Merely describing theoretical aspects of SDoH may not motivate trainees to change their attitudes and behaviours. Familiarising physicians with social determinants in a clinical setting and reflecting on trainees' experience may play a key role in postgraduate training. The COVID-19 pandemic has exacerbated health inequities. 61,62 Primary care physicians can potentially respond to the pandemic in ways that take account of patients' social contexts. 63 In the COVID-19 era, addressing SDoH in primary care should be promoted further.
Future research is needed to determine whether recommendations regarding SDoH and subsequent efforts of medical professionals can improve patient outcomes. In addition, future studies should elucidate the association between physicians' working circumstances and attitudes towards such recommendations.
Funding
The authors did not receive a specific grant for this research from any funding agency in the public, commercial, or not-for-profit sectors.
Ethical approval
This study was approved by the Research Ethics Committee of the University of Tokyo Graduate School of Medicine and Faculty of Medicine (reference number: 2021193NII).
Provenance
Freely submitted; externally peer reviewed.
Data
The dataset is not publicly available. | 2022-12-11T16:12:04.826Z | 2022-12-09T00:00:00.000 | {
"year": 2023,
"sha1": "2fd80f1917fe4c174e109ad6807df32f84034662",
"oa_license": "CCBY",
"oa_url": "https://bjgpopen.org/content/bjgpoa/early/2023/01/23/BJGPO.2022.0129.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5b4b7dd679440cfec1a2ee8f26a757eeb9912b22",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
697731 | pes2o/s2orc | v3-fos-license | Interference of Oxidative Metabolism in Citrus by Xanthomonas citri pv citri
Citrus are one of the most important fruit crops grown worldwide. Among the pathogens that cause disease of Citrus sp. and closely related genera, Xanthomonas citri pv citri (Xcc) causes citrus canker, a devastating disease that is found in 30 countries worldwide and has caused significant economic loss (Del Campo et al., 2009; Rigano et al., 2010). The principal mode of transmission of Xcc is through heavy rain and high wind events, and thus the disease is most severe in regions that experience occasional tropical storms and hurricanes (Graham et al., 2004). Citrus canker outbreaks in Florida, for example, have contributed to a decline in acreage of grapefruit to 61% by 2009 compared to the acreage in 1994 (Anonymous, 2009). Severe canker can cause fruit drop and even tree death (Graham et al., 2004). Further economic losses can be incurred through restricted movement of infected fruits, especially to citrus growing regions where canker is not found (Schubert et al., 2001).
Introduction
The commercial and dietary importance of citrus and the severity of canker have led to extensive research to identify resistant genotypes that would serve as models of study as well as germplasm for crop improvement. Most commercial citrus are within the Citrus genus; however, closely related genera are capable of hybridizing with Citrus sp. and thus have been included in studies to evaluate variation in plant defense to canker. Citrus genotypes can be classified into four broad classes based on susceptibility to canker (Gottwald, 2002). The most highly susceptible commercial genotypes are 'Key' lime [C. aurantifolia (Christm.) Swingle], grapefruit (C. paradisi Macfad.), lemon (C. limon), and pointed-leaf Hystrix (C. hystrix). Susceptible genotypes include limes (C. latifolia), sweet oranges (C. sinensis), trifoliate orange (P. trifoliata), citranges and citrumelos (P. trifoliata hybrids), and bitter oranges (C. aurantium). Resistant genotypes include citron (C. medica L.) and mandarins (C. reticulata Blanco). Highly resistant genotypes include calamondin [Citrus margarita (Lour.)] and kumquat [Fortunella margarita (Lour.) Swingle]. The high degree of resistance to Asiatic citrus canker by calamondin, kumquat, and Ichang papeda (C. ichangenesis) has been noted in the field (Reddy, 1997; Viloria et al., 2004). The sequence of events in the pathogenesis of Xcc in citrus has been described (Brunings and Gabriel, 2003). Following artificial inoculation, the bacterial cells occupy intercellular spaces and begin to divide by the end of the first day after inoculation. Once a critical population threshold is reached, which is about 1 x 10^3 to 1 x 10^4 bacteria per canker lesion, a quorum sensing mechanism (da Silva et al., 2002) is likely the impetus that turns on pathogenicity factors (Bassler, 1999), including Rpf encoding genes (Slater et al., 2000). Within 2 days after inoculation, Xcc attaches to plant cell walls via specialized proteins called "adhesins" (Lee and Schneewind, 2001), by hrp (hypersensitivity response and pathogenicity) pili or by type IV pili, as observed during the Xanthomonas pv. malvacearum-Gossypium hirsutum interaction (Brunings and Gabriel, 2003). Once attached, Xcc uses its T3S system to turn on additional pathogenicity genes (Pettersson et al., 1996) and inject pathogenicity factors into the cell, including Avr, Pop and Pth proteins such as PthA (Brunings and Gabriel, 2003). PthA presumably stimulates plant cell division and enlargement within 3 days after inoculation that reaches a maximum by 7 days after inoculation (Lawson et al., 1989). Cell enlargement, the presence of the bacteria in the apoplast, and its production of hydrophilic polymers cause watersoaking symptoms starting 4 days after inoculation (Duan et al., 1999). The maximum bacterial populations occur at 7 days after inoculation (Khalaf et al., 2007), and about 8 days after inoculation the epidermis ruptures, allowing bacteria to egress to the surface (Brunings and Gabriel, 2003). By 10-14 days after inoculation, necrosis develops in the infected areas (Duan et al., 1999), and by 21 days after inoculation leaves abscise (Khalaf et al., 2007).
Oxidative response of plants to pathogens
The hypersensitive response (HR) involves a rapid, widespread change in plant cell metabolism intended to alter the chemistry of the region within and surrounding the infected area in order to impact the pathogen by deterring its metabolism, isolating it within the infected region, and even directly killing it (Lamb and Dixon, 1997). As part of the response, programmed cell death (PCD) of plant cells within and adjacent to the infected region is often elicited (Lamb and Dixon, 1997). The HR includes alteration of oxidative metabolism to produce reactive oxygen species (ROS) that promote PCD, sicken pathogen metabolism, and promote changes in cell wall chemistry that isolate the pathogen (Azevedo et al., 2008; Kuzniak and Urbanek, 2000; Lamb and Dixon, 1997). In the case of citrus canker, PCD is evident around infection sites by chlorosis, with the chlorotic rings widening as the canker spreads radially from the infection point and along the plane of the leaf blade (Brunings and Gabriel, 2003).
Reactive oxygen species produced during HR and PCD in response to pathogens include superoxide radicals (O2˙-), hydrogen peroxide (H2O2), and hydroxyl radicals (OH˙) (Chen et al., 2008; Lamb and Dixon, 1997; Wojtaszek, 1997). Production of ROS occurs during normal metabolism of uninfected plants, and ROS are maintained at low concentrations by several enzymatic and non-enzymatic pathways. In response to infection by pathogens, concentrations of ROS are increased and compartmentalized during HR and PCD via several pathways mediated by signals including salicylic acid, nitric oxide, and the MAP kinase cascade mechanism (Durrant and Dong, 2004; Vlot et al., 2009) to alter the chemistry within and surrounding the infection site (Mittler, 2002).
One important ROS is H2O2, the concentration of which has been correlated with disease resistance (Lamb and Dixon, 1997; Mittler et al., 1999). H2O2 concentrations can increase very rapidly from 0 to 6 days after inoculation during plant-bacterial pathogen interactions (Wojtaszek, 1997; Gay and Tuzun, 2000).
Based on their metal co-factor, SODs can be classified into three categories: iron SOD (Fe-SOD), manganese SOD (Mn-SOD), and copper-zinc SOD (Cu-Zn-SOD), each of which is specifically compartmentalized in the cell (Alscher et al., 2002). Fe-SOD is located in the chloroplasts, Mn-SODs in the mitochondria and peroxisomes, and Cu-Zn-SOD in the chloroplast, cytosol, and possibly in the apoplast (Alscher et al., 2002). The various SODs play important roles in plant/pathogen interactions. Fe-SOD, for example, appears to be involved in the early signaling with H2O2 by plant cells after infection (Mur et al., 2008; Zurbriggen et al., 2009). Mn-SOD has been reported to play an important role in early apoptotic events during PCD in the Gossypium hirsutum-Xanthomonas campestris pv. malvacearum interaction (Voludakis et al., 2006). However, Kukavica et al. (2009) showed the existence of a cell wall bound Mn-SOD that generated OH˙ in pea roots and probably facilitates cell elongation.
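For orientation, the core reactions underlying the enzyme activities discussed in this section can be summarized as follows. These stoichiometries are standard textbook chemistry added here for reference; they are not taken from the studies cited in this chapter.

$$2\,O_2^{\cdot -} + 2\,H^+ \;\xrightarrow{\text{SOD}}\; H_2O_2 + O_2$$
$$2\,H_2O_2 \;\xrightarrow{\text{CAT}}\; 2\,H_2O + O_2$$
$$H_2O_2 + 2\,\text{ascorbate} \;\xrightarrow{\text{APOD}}\; 2\,H_2O + 2\,\text{monodehydroascorbate}$$
$$O_2^{\cdot -} + H_2O_2 \;\xrightarrow{\text{Fe}^{2+}/\text{Fe}^{3+}}\; OH^{\cdot} + OH^{-} + O_2$$

The first reaction is the dismutation catalyzed by the Fe-, Mn-, and Cu-Zn-SODs described above; the second and third are the main H2O2-scavenging routes (catalase and ascorbate peroxidase); and the fourth, the iron-catalyzed Haber-Weiss reaction, indicates how O2˙- and H2O2 can give rise to the highly reactive OH˙ mentioned in the text.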
Comparative analysis of oxidative metabolism in Xcc resistant and susceptible genotypes
Recent studies on various Citrus sp. and closely related genera have increased our understanding of deficiencies in oxidative metabolism in susceptible genotypes. The most commonly studied resistant genotype is kumquat (Fortunella margarita (Lour.) Swingle). The kumquats have been characterized as canker resistant based on fewer canker lesions per leaf and reduced internal bacterial populations per lesion compared to susceptible genotypes (Khalaf et al., 2007; Viloria et al., 2004). Resistance of kumquat has been exhibited in hybrids with Citrus sp. such as 'Lakeland' limequat, a cross between the highly Xcc-susceptible 'Key' lime and kumquat, which has demonstrated greater canker resistance than 'Key' lime alone under field conditions (Viloria et al., 2004). Furthermore, the Asiatic strain of canker (Canker A) has been shown to reach population densities consistent with a compatible reaction (Stall et al., 1980), and the lower concentrations of Xcc in kumquat indicate a disease resistance mechanism (Viloria et al., 2004). Although oxidative metabolism is complex, recent research has focused on comparing the resistant kumquat and susceptible Citrus genotypes on their H2O2 metabolism, in part due to its importance in cell signaling and its involvement in cell wall chemistry during growth and plant defense.
The basal antioxidant metabolism has been shown to vary in different citrus genotypes (Kumar et al., 2001a), which relates to their fundamental differences in resistance. Kumquat, for example, was shown to have higher total SOD activity than grapefruit and sweet orange, yet H2O2 was lower in kumquat in part because of higher CAT activity. These fundamental differences in basal metabolism are the starting point for changes in oxidative metabolism when challenged with Xcc.
Oxidative metabolism in canker-resistant kumquat
Using an Asiatic strain of canker (Canker A) and infiltration of kumquat leaves, Kumar et al. (2011c) showed that the Xcc populations peaked 4 days after inoculation and declined thereafter. Chlorosis was evident the first day after inoculation and persisted throughout the infection process (Fig. 1). Water soaking was delayed until 4 days after inoculation. H2O2 concentrations increased rapidly to almost 2x the controls 1 day after inoculation, reached about 10 mM from 6 to 8 days after inoculation, and declined slightly thereafter but remained above the controls throughout the infection process (Figs. 1 and 2). The pattern of Xcc population and H2O2 concentrations is consistent with the latter's role in impeding bacterial growth and promoting PCD, which occurred from 10 to 12 days after inoculation. The rapid necrosis in the localized region of the infected kumquat tissue by Xcc has been suggested to be consistent with a hypersensitive response (HR) and induced PCD (Khalaf et al., 2007). Lipid peroxidation was shown to increase rapidly and remain several times higher than the controls in the kumquat-Xcc interaction (Kumar et al., 2011e). Keeping in mind that total SOD activity in the kumquat-Xcc interaction increased and remained high throughout pathogenesis, the decline in Fe-SOD activity beyond the first day after inoculation had to be replaced by a different form of SOD that would dominate during the second peak of total SOD activity. Kumar et al. (2011e) found that Mn-SOD activity increased from 2x to 3x that of the control starting 2 days after inoculation and reached a maximum during the second peak of total SOD activity from 6 to 8 days after inoculation. The prolonged, elevated Mn-SOD activity indicated that this class of SOD was responsible for the majority of total SOD activity throughout the entire pathogenesis process. Mn-SOD is generally considered to be limited to mitochondria and peroxisomes (Alscher et al., 2002). One SOD reported to be located in plant apoplasts is Cu-Fe-SOD (Alscher et al., 2002), and in kumquat infected with Xcc, a putative Cu-Fe-SOD gene was up-regulated 2 to 7 days after inoculation (Khalaf et al., 2007); however, activity of this SOD isoform was not detected (Kumar et al., 2011e). Mn-SOD was also suggested to be involved in cell elongation (Kukavica et al., 2009), which is one of the early events during canker development (Khalaf et al., 2007). Kukavica et al. (2009) proposed a novel role for cell wall bound Mn-SOD that assists in POD-mediated cell elongation by producing OH˙ in the apoplast. Although the formation of OH˙ during the kumquat-Xcc interaction is not verified, its formation is consistent with plant defense considering its high toxicity to Xanthomonas spp. (Vattanaviboon and Mongkolsuk, 1998). Nevertheless, production of O2˙- and conversion of it plus H2O2 to OH˙ in kumquat-Xcc interactions needs to be determined.
In summary, kumquat responds to Xcc by promoting higher concentrations of H2O2 through temporal and qualitative changes in enzymes involved in its synthesis and dismutation. H2O2 is produced initially through increased chloroplastic SOD 1 day after inoculation and thereafter through increased mitochondrial and peroxisomal SOD activity. Elevated symplastic H2O2 concentrations are maintained by declining APOD and later CAT activity. We propose that the elevated concentration of H2O2 diffuses from the symplast to the apoplast, where it directly inhibits bacterial metabolism and is utilized by POD. The higher POD activity presumably utilizes H2O2 to cross-link cell walls and perhaps produce highly toxic OH˙.
Oxidative metabolism in canker susceptible grapefruit and sweet orange
Using the same strain of Asiatic canker, infiltration method, and growing conditions as in kumquat (Kumar et al., 2011c,e), the bacterial population in grapefruit and sweet orange leaves grew to 1 x 10^9 CFU/cm^2 (Kumar et al., 2011b,d), which was 10x that of kumquat (Kumar et al., 2011e). In general, the responses of grapefruit and sweet orange to Xcc were similar. Whereas the Xcc population peaked in kumquat 4 days after inoculation, the population peak occurred 8 days after inoculation in grapefruit (Figs. 1 and 3) and 14 days after inoculation in sweet orange. Chlorosis was evident in grapefruit and sweet orange by the first day after inoculation, as in kumquat. However, water soaking, which did not occur until 4 days after inoculation in kumquat, occurred by the second day in grapefruit and sweet orange. Furthermore, swelling of the leaves in the inoculated region was evident starting 6 days after inoculation. Necrosis was evident from 16 days after inoculation until leaf abscission, which occurred a week later than in kumquat.
Unlike H2O2 concentrations in kumquat, which increased and remained high until Xcc populations declined, H2O2 concentrations in grapefruit and sweet orange leaves demonstrated a biphasic pattern. There was an initial surge in H2O2 concentration in both susceptible genotypes similar to that found in kumquat, except that it reached only about 1/3 of the concentration and the surge only lasted until 4 days after inoculation (Kumar et al., 2011b,d). H2O2 concentrations declined to or below the controls and then surged a second time, but only to the same concentrations or to concentrations slightly above the controls, from 12-14 days after inoculation. The crash in H2O2 concentration occurred very late in the log phase of bacterial growth, the stage most susceptible to H2O2 (Tondo et al., 2010), which allowed extension of that phase, resulting in the higher bacterial populations compared to kumquat.
The disturbance in H2O2 concentration was related to temporal and qualitative changes in enzyme activities related to H2O2 metabolism. Total SOD activity in grapefruit and sweet orange generally followed that of H2O2 concentration, with a peak in activity occurring 4 days after inoculation followed by a rapid decline, with activities similar to or less than the controls for the rest of the infection process (Kumar et al., 2011b,d). The initial increase in total SOD activity was due to a surge in Fe-SOD activity similar to that of kumquat. Three Fe-SOD isoforms were detected in both infected and control leaves of grapefruit, but it was Fe-SOD 2 that contributed most of the Fe-SOD activity observed. Down-regulation of Fe-Sod1 transcription was observed in Botrytis cinerea-infected cultured cells of Pinus pinaster (Azevedo et al., 2008), but whether this gene is involved in Xcc-susceptible citrus genotypes is unknown.
Manganese superoxide dismutase activity surged in a manner similar to kumquat but then crashed to activities similar to the controls by 4 days after inoculation (Kumar et al., 2011b,d). Thus, the decline in H2O2 concentration in grapefruit and sweet orange was due in part to suppression of Mn-SOD activity. Four Mn-SOD isoforms were observed in grapefruit (Kumar et al., 2011d). Mn-SOD 3 was constitutively active; however, Mn-SOD 1 and 2 were higher at 2 and 4 days after inoculation but thereafter gradually disappeared. It appears then that Mn-SOD 1 and 2 are initially promoted in response to Xcc infection, but the response dissipates later in the infection process. A weakly stained Mn-SOD 4 was observed at 10 days after inoculation and appeared to be a last attempt by the host to generate more H2O2 to suppress Xcc or as part of PCD in the infected zone (Vattanaviboon and Mongkolsuk, 1998).
In addition to changes in activities of the various SODs, H2O2 degrading enzymes also demonstrated temporal and qualitative changes in activity (Kumar et al., 2011b,d). Catalase activity increased above the control in grapefruit starting 2 days after inoculation and remained above the control, peaking 16 days after inoculation, which is the opposite of kumquat, where CAT activity was suppressed (Kumar et al., 2011b). Four CAT isoforms were detected in controls and six in Xcc-infected grapefruit, with CAT 4 and 5 novel in the latter plants and the intensity of the CAT 2 and 4 bands very high compared to the controls. Higher expression of CAT 2 mRNA in roots of potato was found during pathogenesis of Corynebacterium sepedonicum NCPPB 2137 and Erwinia carotovora spp. carotovora NCPPB 312, providing the first evidence that class II CAT isoforms are also pathogen induced (Niebel et al., 1995). Thus, the elevated CAT activity in grapefruit partially explains the decline in H2O2 concentrations in grapefruit.
Unlike kumquat, where APOD activity was suppressed in Xcc-infected plants, APOD activity in grapefruit increased 4 days after inoculation and remained higher than the controls up to 16 days after inoculation (Kumar et al., 2011b). Like CAT, the higher APOD activity contributed to the lower H2O2 concentrations.
The class III POD activity levels were higher in Xcc-infected grapefruit and sweet orange leaves 1 day after inoculation (Kumar et al., 2011b,d), which was similar to kumquat. Three isoforms (POD 1, 2 and 3) were detected in control and infected leaves of both genotypes, with higher intensity of all three bands in infected tissues. In a separate study of Xcc-infected sweet orange, POD genes were shown to be up-regulated as early as 6 hours after inoculation (Cernadas et al., 2008). More than 70 isoforms of PODs have been identified in plants, and it is currently difficult to assign a physiological function to each one due to gene redundancy (Sasaki et al., 2004). Nevertheless, it is interesting that unlike CAT and APOD, where there was a differential response in susceptible (grapefruit and sweet orange) and resistant (kumquat) genotypes, POD activity in all three genotypes increased in response to Xcc.
Proposed model of citrus response to canker
A comparison of Xcc population, symptom development, H2O2, and activities of enzymes involved in H2O2 metabolism between the resistant genotype kumquat and a susceptible genotype such as grapefruit can reveal deficiencies in susceptible genotypes. Although similar concentrations of Xcc were injected in leaves of both genotypes, the population was 10x less in kumquat than grapefruit by 3 days after inoculation and remained substantially lower. Activity of Fe-SOD, located in the chloroplast, an organelle presumed to be involved in pathogen sensing and signaling, increased 1 day after inoculation in kumquat but 2 days after inoculation in grapefruit, which indicates a delayed response in the latter genotype. The reduced Xcc population in kumquat compared to grapefruit was due, in part, to the lower H2O2 in the latter genotype. Although H2O2 increased in both species upon infection, at its peak 5 days after inoculation it was only about 1/3 the concentration in grapefruit compared with kumquat. The sustained H2O2 concentration in kumquat was due to higher and sustained Mn-SOD activity and lower CAT and APOD activities. In grapefruit, however, CAT increased 1 day after inoculation, APOD increased 3 days after inoculation, and Mn-SOD declined 5 days after inoculation. There are reports showing that Xanthomonas spp. are naturally very resistant to O2˙-. Watersoaking developed earlier in grapefruit (2 days after inoculation) than kumquat (4 days after inoculation). Water soaking is a characteristic symptom of Xcc infection in citrus that is caused in part by increased uptake of water through capillary action as a consequence of loss of intercellular space between rapidly dividing and enlarging mesophyll cells (Khalaf et al., 2007; Popham et al., 1993). The earlier watersoaking of grapefruit and the higher raised epidermis are indicative of increased cell growth in this genotype compared to kumquat. It is interesting that POD activity in both genotypes was elevated upon Xcc infection. Peroxidase serves a dual role of promoting cell enlargement by loosening the cell wall but is also involved in cross-linking of cell wall components during cell maturation, a process that inhibits cell enlargement (Passardi et al., 2004). Which process occurs would be substrate dependent and would vary temporally and spatially. Such a temporal and spatial variation in POD activity has been shown to occur during cell growth of Arabidopsis thaliana leaves, where cell enlargement was promoted early and cell wall stiffening occurred later (Abarca et al., 2001). The changes in CAT, APOD and Mn-SOD that lowered H2O2 concentrations in grapefruit preceded the raised epidermis, and thus it is reasonable to assume that these concentrations of H2O2 were necessary to promote cell enlargement in this genotype, whereas the higher concentrations of H2O2 that occurred in kumquat were excessive and involved in suppression of Xcc. Thus, we propose that the lower H2O2 concentrations in grapefruit promoted plant cell growth, whereas the higher H2O2 concentrations in kumquat were involved in cross-linking of cell wall polymers and possibly the production of OH˙. Solutions to Xcc in susceptible citrus genotypes such as grapefruit and sweet orange will need to include promoting earlier, higher, and sustained H2O2 concentrations.
The comparative studies of oxidative metabolism in susceptible and resistant genotypes to Xcc have identified deficiencies in susceptible genotypes. Altering their response, either through exogenous applications of chemicals that evoke systemic acquired resistance and induced systemic resistance or through genetic modification, should be a focus of future research. In particular, stimulation of Mn-SOD activity, which is important for sustained production of H2O2, and suppression of CAT and APOD activity to maintain high concentrations of H2O2 in susceptible genotypes should improve resistance to Xcc. Strategies that improve H2O2 metabolism to enhance resistance should provide new cultural management approaches in commercial groves for reducing the economic impact of this disease.
Fig. 2. Proposed mechanism of oxidative metabolism that promotes disease resistance in kumquat. Changes in enzyme activities and H2O2 concentration taken from Kumar et al. 2011c,e.
Fig. 3. Proposed mechanism of oxidative metabolism in grapefruit that promotes population growth of Xcc. Changes in enzyme activities and H2O2 concentration taken from Kumar et al. 2011b,d.
Early after infection, elevated concentrations of H2O2 serve as diffusible signals to induce defense genes in adjoining cells, with the later elevated concentrations serving in the direct inhibition of pathogens (Alverez et al., 1998; Dat et al., 2000; Lamb and Dixon, 1997). Ascorbate peroxidases (APODs) use ascorbate as a substrate as part of the glutathione-ascorbate cycle (Foyer et al., 2009). Ascorbate peroxidase is ubiquitous throughout the cell and thus is important in catalyzing H2O2 that is produced as a waste product of different metabolic pathways (Mittler, 2002). The importance of APOD in disease resistance has been shown in transgenic tobacco transformed with antisense cAPX (Nicotiana tabacum cv Bel W3) that exhibited PCD, accompanied by fragmentation of nuclear DNA and leading to necrotic lesions, after being challenged with Pseudomonas syringae pv. tabaci, Pseudomonas syringae pv. phaseolicola NPS3121 and Pseudomonas syringae pv. syringae (Mittler et al., 1999; Polidoros et al., 2001). The use of guaiacol as a substrate to test peroxidase activity is limited to the Class III peroxidases (POD), which are characterized by secretion into the apoplast and utilize phenolic compounds as substrates to cross-link cell walls during cell maturation (De Gara, 2004; Liszkay et al., 2003; Sasaki et al., 2004). During infection, the Class III PODs promote lignification, suberization, cross-linking of cell wall proteins, and phytoalexin synthesis to sicken metabolism and isolate the pathogen (Sasaki et al., 2004; Quiroga et al., 2000). The peroxidative cycle of POD uses H2O2 as an oxidant to convert phenolic compounds to phenoxy radicals that spontaneously combine to form lignin responsible for cell wall stiffening (Liszkay et al., 2003; Martinez et al., 1998).
The products of lipid peroxidation in turn are toxic to plant and bacterial cells, which is consistent with PCD as part of the HR to pathogens (Gobel et al., 2003; Kumar et al., 2011e; Rusterucci et al., 1996). It is interesting that, using the injection method, kumquat did not display much swelling of the epidermis, which is required for egress of Xcc to the leaf surface. Kumar et al. (2011c,e) concluded that the retention of bacteria in the leaf, coupled with early leaf abscission, which occurred from days 10 through 12, is consistent with a disease avoidance mechanism.
The elevated H2O2 concentration in kumquat is promoted through SOD activity. Kumar et al. (2011e) showed that total SOD activity demonstrated two peaks during the course of Xcc infection of kumquat, at 1-2 days after inoculation and 6-8 days after inoculation, although the total SOD activity was always higher than the uninfected controls. Analysis of the activity and isoforms of the various SODs showed that they were altered, indicating compartmentalization of H2O2 production (Kumar et al., 2011c,e). The first peak in total SOD activity was associated with a rapid increase in Fe-SOD activity to 2x the controls by 1 day after inoculation, but the activity dropped rapidly to near or below the controls thereafter. Fe-SOD is compartmentalized in chloroplasts, and studies on other plant-pathogen interactions have shown that chloroplasts are an important source of ROS signals that initiate changes in oxidative metabolism in other cellular compartments (Mur et al., 2008; Zurbriggen et al., 2009). Although Fe-SOD activity initially surged, high concentrations of H2O2 have been shown to deactivate Fe-SOD (Giannopolitis and Ries, 1977), which is consistent with suppression of Fe-SOD activity after the first day (Kumar et al., 2011e). Cu-Zn-SOD is also found in the chloroplasts (Alscher et al., 2002), but Kumar et al. (2011e) found no activity of this SOD isoform during the kumquat-Xcc interaction. Mitogen-activated protein kinases (MAPK), which respond to external stimuli, are activated in plant-pathogen interactions and promote ROS generation in chloroplasts by inhibiting CO2 assimilation that serves as a sink for ROS generated by light (Liu et al., 2007; Zurbriggen et al., 2009). Evidence that this mechanism functions during the kumquat-Xcc interaction is supported by differential expression of related genes (Khalaf et al., 2007). Other studies have also shown the importance of mitochondria in generating ROS to promote PCD (Mur et al., 2008; Yao et al., 2002). Thus, the elevated H2O2 concentration during the kumquat-Xcc interaction is promoted by SOD activity, first in the chloroplast and thereafter in the peroxisome and mitochondria, and the sustained production of H2O2 in peroxisomes and mitochondria indicates that these organelles serve as important generators of H2O2 during kumquat-Xcc interactions.
The fate of H2O2 in the kumquat-Xcc interaction is determined, in part, by enzymes involved in its dismutation. Catalase is considered the major H2O2 scavenging enzyme and is located in peroxisomes of plant cells (Kamada et al., 2003; Hu et al., 2010). During the kumquat-Xcc interaction, total CAT activity remained similar to the controls up to 5 days after inoculation but declined starting 6 days after inoculation to almost half of the controls (Kumar et al., 2011c). Interestingly, CAT demonstrated qualitative and temporal changes in isoforms (Kumar et al., 2011c). Plants have been shown to contain three CAT genes that code for three subunits and generate at least six isoforms that are classified into three classes (Hu et al., 2010). Class I CATs are abundant in tissues that contain chloroplasts, Class II CATs are mainly expressed in vascular tissues, and Class III CATs are generally found in young and senescent tissues. In uninfected kumquat leaves, Kumar et al. (2011c) identified 4 CAT isoforms (CAT 1-4) that appeared to be constitutive and therefore belong in Class I and II. Activity of the constitutive isoforms began to decline at 4 days after inoculation, and CAT-4 declined starting at 10 days after inoculation, probably due to termination of all metabolic activity because of necrosis. A novel CAT isoform, CAT-5, was expressed 4 days after inoculation and appears to belong to Class III, since senescence, as indicated by chlorosis, rapidly developed at this time. There was no evidence of CAT-6. The decline in CAT activity coincided with the highest concentrations of H2O2 but occurred during the stationary phase of Xcc population growth (Kumar et al., 2011e). Xcc during the log phase of growth in kumquats is highly susceptible to H2O2, with almost no survival upon exposure to 1 mM H2O2, in comparison to stationary phase populations that can resist up to 30 mM of H2O2 (Tondo et al., 2010). H2O2 increased to almost 10 mM (Kumar et al., 2011c,e), which was high enough to restrict Xcc during the log phase but not enough to impact bacterial populations during the stationary phase of growth (Tondo et al., 2010). The Xcc stationary phase populations were able to resist higher external H2O2 concentrations due to high bacterial CAT activity via the expression of four CAT genes (katE, catB, srpA, and katG) (Tondo et al., 2010). Thus, it appears that the reduced plant CAT activity, which occurred during the stationary phase of Xcc population growth, was too late to directly impact the pathogen. Perhaps molecular modification increasing CAT activity earlier in kumquat would suppress Xcc concentrations further by allowing H2O2 concentrations to increase during the log phase of Xcc growth (Chaouch et al., 2010). Since CATs are limited to peroxisomes, it appears that this organelle serves an important role in canker resistance by elevating H2O2 concentrations that diffuse to the rest of the cell, and thus it could become a promising site for resistance enhancement in susceptible citrus by genetic engineering of CAT gene expression or by post-translational modification of CAT proteins (Chaouch et al., 2010). Tobacco plants (Nicotiana tabacum cv. Bel w3) with reduced CAT expression exhibited necrotic lesions and displayed elevated concentrations of pathogenesis-related proteins (Mittler et al., 1999).
APOD activity declined after Xcc inoculation to less than half the activity of the controls by 12 days after inoculation (Kumar et al., 2011c). The immediate and increasing decline in APOD activity is an adaptive plant response to help promote elevated H2O2 concentrations throughout the symplast, and APOD is the principal enzyme that allowed H2O2 concentrations to increase in infected kumquat. There is evidence that higher H2O2 concentrations inactivate APODs at both the transcriptional and post-transcriptional levels (Zimmermann et al., 2006; Paradiso et al., 2005). Higher concentrations of H2O2 rather than O2˙- in the symplast are interesting because H2O2 is a less reactive ROS, which may indicate another role for H2O2 than promoting senescence alone. Xcc are only found in the apoplast, and any positive effect of higher H2O2 concentrations would require diffusion out of the symplast. H2O2 in the apoplast would allow it to serve as a substrate for the Class III PODs.
During normal metabolism of uninfected plants, H2O2 is utilized by the Class III PODs to promote loosening of cell walls during cell enlargement and to cross-link cell wall polymers during cell maturation (de Gara, 2004). The Class III PODs are also an adaptive defense mechanism against pathogens, since the cross-linking of cell wall polymers diminishes the ability of pathogens to enzymatically digest the cell wall and thus isolates the pathogen in a confined area (Bradley et al., 1992; Passardi et al., 2005). Kumquat POD activity tripled 1 day after inoculation with Xcc and continued to increase to 8 days after inoculation (Kumar et al., 2011c). No canker development occurred beyond the initial infection zone, as evidenced by water soaking upon injection, indicating isolation of the bacteria consistent with activity of the Class III PODs. No up-regulation of POD has been shown for kumquat, but transcriptional analysis has shown up-regulation of POD genes in sweet orange leaves 2 days after inoculation with Xcc (Cernadas et al., 2008).
Suppression of CAT and APOD activity to maintain high concentrations of H2O2 in susceptible genotypes should improve resistance to Xcc. Strategies that improve H2O2 metabolism to enhance resistance should provide new cultural management approaches in commercial groves for reducing the economic impact of this disease. Fig. 1. Comparison of Xcc population, canker symptoms, H2O2, and activities of enzymes involved in H2O2 metabolism for kumquat (K) and grapefruit (G) by days after inoculation (dai). Population concentrations are shown as the ratio of kumquat to grapefruit; the arrows indicate the ratio in Xcc population between kumquat and grapefruit. Arrows for H2O2 and enzyme activities indicate a comparison of Xcc-infected to uninfected leaves. Symptom classification: C = chlorosis, W = watersoaking, E = raised epidermis, N = necrosis. Enzyme classification: SOD = superoxide dismutase (and its various forms as indicated by their metal cofactor), CAT = catalase, APOD = ascorbate peroxidase, POD = class III peroxidase. Data were taken from Kumar et al., 2011b,c,d,e. | 2017-09-17T02:46:33.262Z | 2012-05-02T00:00:00.000 | {
"year": 2012,
"sha1": "43a7e74f5f19639c75424a6564c6b6596155e98b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5772/33853",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "43a7e74f5f19639c75424a6564c6b6596155e98b",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
257806745 | pes2o/s2orc | v3-fos-license | Association between intraoperative dexmedetomidine and all-cause mortality and recurrence after laparoscopic resection of colorectal cancer: Follow-up analysis of a previous randomized controlled trial
Background Dexmedetomidine (DEX) has been widely applied in the anesthesia and sedation of patients with oncological diseases. However, the potential effect of DEX on tumor metastasis remains contradictory. This study follows up on patients who received intraoperative DEX during laparoscopic resection of colorectal cancer as part of a previous clinical trial, examining their outcomes 5 years later. Methods Between June 2015 and December 2015, 60 patients undergoing laparoscopic colorectal resection were randomly assigned to the DEX and control groups. The DEX group received an initial loading dose of 1 μg/kg before surgery, followed by a continuous infusion of 0.3 μg/kg/h during the operation; the control group received an equivalent volume of saline. A 5-year follow-up analysis was conducted to evaluate the overall survival, disease-free survival, and tumor recurrence. Results The follow-up analysis included 55 of the 60 patients. The DEX group included 28 patients, while the control group included 27 patients. Baseline characteristics were comparable between the two groups, except for vascular and/or neural invasion of the tumor in the DEX group (9/28 vs. 0/27, p = 0.002). We did not observe a statistically significant benefit but rather a trend toward an increase in overall survival and disease-free survival in the DEX group: 1-year overall survival (96.4% vs. 88.9%, p = 0.282), 2-year overall survival (89.3% vs. 74.1%, p = 0.144), 3-year overall survival (89.3% vs. 70.4%, p = 0.08), and 5-year overall survival (78.6% vs. 59.3%, p = 0.121). The total rates of mortality and recurrence between the two groups were comparable (8/28 vs. 11/27, p = 0.343). Conclusion Administration of DEX during laparoscopic resection of colorectal cancer had a nonsignificant trend toward improved overall survival and disease-free survival. Clinical Trial Registration http://www.chictr.org.cn/, identifier ChiCTRIOR-15006518.
Introduction
Surgical resections are the major treatment for most solid tumors and are associated with patients' long-term functionality and quality of life. Perioperative treatment has shown great potential to influence the postoperative outcomes of cancer patients. For instance, intraoperative local anesthetic infusion was reported to increase cancer-specific mortality in colon resections (1), and propofol-based total intravenous anesthesia was associated with better overall survival compared to volatile anesthesia in oncological patients (2). However, the effect of different anesthesia methods and anesthetics on the long-term prognosis of oncological patients remains controversial (3)(4)(5).
In recent years, dexmedetomidine (DEX), a highly selective alpha2 adrenoceptor agonist, has been widely applied in clinical anesthesia settings, including in oncological patients (6)(7)(8). However, whether DEX can reasonably be used in tumor resections remains controversial. Some recent investigations suggested that DEX could promote tumor cell proliferation (9-11), metastasis, and migration in vitro (12,13), and even decrease overall postoperative survival in oncological patients who underwent lung resections (14), whereas others found that DEX attenuated tumor cell metastasis and progression in the perioperative period (15)(16)(17). Given these controversial reports, there is still a notable lack of high-quality clinical studies clarifying the effects of DEX on the long-term prognosis of cancer patients.
In a previous study, we examined the immediate effects of administering DEX during elective laparoscopic resection of colorectal cancer. The findings indicated that DEX improved postoperative gastrointestinal motility function and resulted in more stable hemodynamics throughout the surgery (18). In the current study, we conducted a 5-year follow-up analysis of the same cohort to investigate the impact of intraoperative DEX on long-term survival and tumor recurrence following laparoscopic resection of colorectal cancer.
Methods
The present study was carried out in accordance with the Declaration of Helsinki and was approved by the Institutional Review Board of the Third Affiliated Hospital of Sun Yat-Sen University (approval number: [2015]02-95-02). The study was registered on the Chinese Clinical Trial Registry (www.chictr.org) on June 7, 2015 (registration number: ChiCTRIOR-15006518). The trial protocol, design, and short-term outcomes of the randomized double-blind clinical trial have been reported previously (18).
A total of 60 patients undergoing elective laparoscopic colorectal resection at the institution (The Third Affiliated Hospital, Sun Yat-Sen University, China) between June 2015 and December 2015 were randomly assigned to the DEX group and the control group. All patients were operated on under the same general anesthesia protocol as described previously (18). All surgical procedures were performed by the same surgical group. In the DEX group, a loading dose of DEX (1 μg/kg) was given over 10 min before induction, followed by continuous intraoperative infusion (0.3 μg/kg/h). The patients in the control group were given the same volume of saline instead. Patients who met any of the following criteria had been excluded in our previous research: gastrointestinal motility disorder; history of abdominal surgery; bradyarrhythmia including sick sinus syndrome, sinus bradycardia or atrioventricular block; long-term administration of sedatives; psychiatric or neurologic comorbidity; hepatic or renal dysfunction; or distant metastasis.
A follow-up analysis of postoperative mortality and tumor recurrence was conducted in November 2021. Medical records were extracted from the hospital information system (HIS), and telephone follow-ups were used to obtain patient information. Patients who had benign lesions, non-malignant polyps, or Stage IV metastatic disease were not included in the follow-up analysis. Overall survival was calculated from the date of surgery to the date of death from any cause. Disease-free survival was measured from the date of surgery to the date of recurrence or death from any cause. All-cause mortality was defined as death from any cause, while cancer-specific mortality was defined as death due to metastatic progression. The types of recurrence were classified as locoregional or distant. The duration between the date of surgery and the date of recurrence was defined as the time to recurrence. Patients with no evidence of recurrence at the time of death were censored at the date of death, while patients who remained alive at the time of analysis were censored at the end of the follow-up period.
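As a hedged illustration of these time-to-event definitions, the short Python sketch below encodes the censoring rules just described; the follow-up end date, example dates, and function names are hypothetical and not taken from the study data.

```python
# Minimal sketch of the survival and censoring rules described above.
# FOLLOW_UP_END and all example dates are assumptions for illustration.
from datetime import date
from typing import Optional, Tuple

FOLLOW_UP_END = date(2021, 11, 30)  # assumed end of the follow-up period

def overall_survival(surgery: date, death: Optional[date]) -> Tuple[float, int]:
    """Years from surgery to death from any cause; alive -> censored (event=0)."""
    end, event = (death, 1) if death else (FOLLOW_UP_END, 0)
    return (end - surgery).days / 365.25, event

def time_to_recurrence(surgery: date, recurrence: Optional[date],
                       death: Optional[date]) -> Tuple[float, int]:
    """Years from surgery to recurrence; patients who die without recurrence
    are censored at the date of death, survivors at the end of follow-up."""
    if recurrence:
        return (recurrence - surgery).days / 365.25, 1
    end = death if death else FOLLOW_UP_END
    return (end - surgery).days / 365.25, 0

print(overall_survival(date(2015, 7, 1), None))                      # censored survivor
print(time_to_recurrence(date(2015, 7, 1), date(2016, 8, 1), None))  # recurrence event
```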
Baseline characteristics compared between the two groups included age, gender, body mass index (BMI), American Society of Anesthesiologists Physical Status Classification (ASA grade), type of operation, American Joint Committee on Cancer (AJCC) stage, tumor pathology, and adjuvant chemotherapy treatment. To ensure that recorded postoperative complications up to 30 days after surgery were comparable in both groups, specific complications were defined according to the criteria shown in Table 1 (19). The Clavien-Dindo classification system (20) was used to grade postoperative complications. If a patient experienced multiple complications, the highest grade was considered for analysis.
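Since the grading rule for patients with multiple complications is a simple maximum over an ordered scale, it can be written compactly; the sketch below assumes the standard Clavien-Dindo grade labels.

```python
# Sketch of the "highest grade counts" rule; the example is hypothetical.
CD_ORDER = {"I": 1, "II": 2, "IIIa": 3, "IIIb": 4, "IVa": 5, "IVb": 6, "V": 7}

def worst_complication(grades: list) -> str:
    """Return the highest Clavien-Dindo grade among a patient's complications."""
    return max(grades, key=CD_ORDER.__getitem__)

print(worst_complication(["I", "IIIa", "II"]))  # -> "IIIa"
```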
Statistical analysis
Statistical analysis was conducted using SPSS 19.0 software (SPSS Inc., Chicago, IL). The one-sample Kolmogorov-Smirnov test was performed to assess the normality of the quantitative data. Mean ± standard deviation (SD) was used to describe quantitative variables that followed a normal distribution, and the t-test was used to compare the differences between groups. Categorical data or data without a normal distribution were presented as median (interquartile range) or counts and compared by Fisher's exact test for categorical variables or otherwise by the Mann-Whitney U test. Survival differences between groups were assessed by Kaplan-Meier curves and analyzed using the Mantel-Cox (log-rank) test. Statistical significance was defined a priori as a p-value < 0.05.
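For readers who prefer open-source tools, the pipeline above maps directly onto scipy and lifelines. The sketch below is illustrative only: the group sizes and the 9/28 vs. 0/27 invasion counts come from this paper, while the follow-up times and ages are simulated.

```python
# Illustrative re-creation of the described analysis pipeline on synthetic data.
import numpy as np
from scipy import stats
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# Hypothetical follow-up times (years, capped at 5.5) and death indicators.
t_dex = rng.exponential(8.0, 28).clip(max=5.5)
t_ctl = rng.exponential(5.0, 27).clip(max=5.5)
d_dex = (t_dex < 5.5).astype(int)  # 1 = died, 0 = censored at end of follow-up
d_ctl = (t_ctl < 5.5).astype(int)

# Normality check: one-sample Kolmogorov-Smirnov against a fitted normal.
age_dex, age_ctl = rng.normal(62, 10, 28), rng.normal(61, 11, 27)
print(stats.kstest(age_dex, "norm", args=(age_dex.mean(), age_dex.std())))

# Normally distributed variable: independent-samples t-test.
print(stats.ttest_ind(age_dex, age_ctl))

# Categorical variable, e.g. vascular/neural invasion: Fisher's exact test.
print(stats.fisher_exact([[9, 19], [0, 27]]))

# Kaplan-Meier curve per group and the Mantel-Cox (log-rank) comparison.
km_dex = KaplanMeierFitter().fit(t_dex, d_dex, label="DEX")
print(logrank_test(t_dex, t_ctl,
                   event_observed_A=d_dex, event_observed_B=d_ctl).p_value)
```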
Results
Out of the 60 patients, 55 from the previous randomized clinical trial were included in the follow-up analysis. Five subjects were excluded from the analysis because of metastatic tumor at the time of operation. In total, 28 patients received intraoperative DEX, while 27 received the same dose of saline.
Baseline characteristics of the study population
Baseline characteristics of the two groups are listed in Table 2. Demographic characteristics were comparable between the two groups in age, gender, height, weight, BMI, ASA grade, operation type, tumor stage, and adjuvant chemotherapy treatment. The predominant tumor type was adenocarcinoma at stage II or III. All patients underwent R0 resection. Tumor differentiation was comparable between the two groups. However, there was a significant difference in vascular and/or neural invasion of the tumor, with more patients in the DEX group showing such invasion (9/28 vs. 0/27, p = 0.002). There were no significant differences in either the grade or type of postoperative complications observed between the groups (Table 3).
Primary and secondary outcomes
By the time of analysis, the median duration of follow-up was 5.3 years (1.72-5.58 years) in the control group and 5.47 years (5.24-6.03 years) in the DEX group (Table 4). The primary outcome, overall survival, is shown in Figure 1. The study did not demonstrate a statistically significant benefit for overall survival at 5 years, but rather a trend towards increased survival in the DEX group, reflected in relatively higher survival rates at each time point. Consistently, the all-cause mortality (6/28 vs. 11/27, p = 0.121) and cancer-specific mortality (5/28 vs. 10/27, p = 0.110) in the DEX group were relatively lower during the follow-up period, though there were no significant differences (Table 4). Meanwhile, compared with the control group, there was a trend toward a lower rate of distant tumor recurrence in the DEX group (4/28 vs. 8/27, p = 0.205). The total rates of mortality and recurrence between the two groups were comparable (8/28 vs. 11/27, p = 0.343), as was the rate of locoregional recurrence (3/28 vs. 2/27, p = 1.000). Moreover, there was no significant difference in the time from operation to recurrence between the two groups (1.08 (0.79) years vs. 1.11 (0.97) years, p = 0.95).
Discussion
This study analyzed the follow-up of patients involved in a previously published randomized controlled trial who were operated on for colorectal cancer and received DEX during the surgical procedure. We compared the long-term outcomes of patients who received DEX vs. those who received saline instead, after 5 years of follow-up. The results showed a nonsignificant trend toward improved overall survival and disease-free survival in the DEX group compared with the control group. The total rates of mortality and cancer recurrence between the two groups were comparable. However, the postoperative pathological results showed a significant difference in vascular and/or neural invasion of the tumor: more patients in the DEX group had vascular and/or neural invasion. Patients receiving DEX had relatively lower all-cause mortality, cancer-specific mortality, and rate of distant recurrence, though the differences were not statistically significant. However, the sample was too small to support firm conclusions, and a larger cohort followed over time would be more informative.
As one of the most effective treatments for most solid tumors, surgical resection has been reported to potentially promote tumor metastases by different mechanisms, including the increased risk of micro-metastasis and the formation of new metastatic foci when tumor cells are shed from the lesion. Stress-related immunity suppression, the trauma-related release of growth factors that facilitate tumor cell proliferation, attenuated inhibition of angiogenesis after primary tumor removal, and the complex effects of anesthetics have also been reported to be involved (2,(21)(22)(23)(24). The introduction of Enhanced Recovery After Surgery (ERAS) has prompted an increased focus among anesthesiologists on the impact of perioperative interventions on the long-term prognosis of cancer patients (25). There is growing evidence suggesting that perioperative care and different anesthetics can influence long-term oncological outcomes (26). For instance, it was suggested that patients who received propofol and sevoflurane in general anesthesia had better overall survival than those who received desflurane alone (2). Although DEX has been shown to promote tumorigenesis in neurogliomas, lung carcinomas, breast cancer, and colon cancers (12,27), others suggested that DEX could lower the tumor weight and tumor burden in xenograft mice with ovarian cancer (28) and repress esophageal cancer cell proliferation in vivo (29). Despite the controversial in vivo results, the effect of DEX on long-term survival and tumor recurrence after laparoscopic resection of colorectal cancer has not been evaluated in the clinical setting.
As a widely applied anesthetic adjuvant, DEX has appeared to be associated with lower mortality in cardiac surgery and has demonstrated a trend toward reduced cardiac complications in non-cardiac surgery (30)(31)(32). In a previous study conducted by our team, it was demonstrated that administering DEX during the intraoperative period improved the recovery of gastrointestinal motility function following laparoscopic resection of colorectal cancer (18). Vascular and neural infiltrations are known to be ominous prognostic factors in tumors. The presence of vascular and/or neural invasion is associated with worse 5-year cancer-specific survival and worse 5-year overall survival in stage III and IV patients (33, 34). Although more patients in the DEX group had neurovascular invasion, there was no significant difference between the two groups in survival and mortality. Surprisingly, there was even a trend toward increased overall survival and disease-free survival in the DEX group. The study suggests that intraoperative administration of DEX may have potential benefits for the long-term prognosis of patients undergoing laparoscopic resection of colorectal cancer, which is consistent with the results of its recent application in uterine cancer surgery (35), but contradictory to what is biologically plausible based on some in vivo evidence (27,36).
The contradictory findings could potentially be attributed to variations in the study subjects. It has been suggested that DEX may inhibit the hypothalamic-pituitary-adrenal (HPA) axis and reduce sympathetic activation (37). Surgical stress has been reported to activate the HPA axis and sympathoadrenal responses, which promote the expression of adrenoreceptors on T cells (38, 39) and facilitate the differentiation of T cells from Th1 into Th2 cells, thus altering the balance between the two subtypes and resulting in inhibition of immune function (40,41). Increasing evidence has confirmed that administration of DEX is associated with attenuated postoperative immunosuppression, as reflected by increased CD4+:CD8+ and Th1:Th2 ratios (42,43), and these results were also confirmed in patients with colorectal cancer (38, 44).
Notably, we found the incidence of postoperative complications within 30 days after surgery in our study to be lower than in other reports (1,26). We think this may be attributed to the superb technical skills of our gastrointestinal surgical team (45), who are devoted to applying total mesorectal excision with preservation of Denonvilliers' fascia (iTME) in laparoscopic colorectal resection, which has been shown to improve postoperative urogenital function (46). This study has several limitations. Firstly, the initial randomized controlled trial was designed to detect differences in postoperative intestinal function; assessing long-term survival and cancer recurrence was not its primary objective. Consequently, the sample size was limited, and the conclusions that can be drawn from this follow-up study are of restricted scope. As such, it should be noted that this study is exploratory in nature and serves to generate hypotheses for further investigation. A retrospective cohort study enrolling more patients who underwent laparoscopic resection of colorectal cancer could be conducted in the near future to confirm the current hypothesis. However, the inclusion and exclusion criteria, the dosage of dexmedetomidine, and differences between surgical and anesthesia teams are all confounding factors that are difficult to control, which made it difficult for us to expand the sample size for this study. A further multicenter randomized controlled study with a larger sample size would help to confirm the effects of dexmedetomidine on all-cause mortality and recurrence among patients who undergo laparoscopic resection for colorectal cancer. Secondly, we did not collect detailed information on the medication and surgical history of the patients, and whether the patients in the control group also received DEX during the 5-year follow-up period is unclear; this might be another confounder. Despite its limitations, the initial randomized controlled trial design has enhanced the analysis in this study by ensuring subject randomization, which creates equivalent groups and minimizes the chance of significant confounding variables.
In summary, administration of DEX during laparoscopic resection of colorectal cancer had a nonsignificant trend towards improved overall survival and disease-free survival. The small sample size may limit statistically positive findings in the study. Studies with larger sample sizes should be developed to verify the results.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding authors.
Ethics statement
The studies involving human participants were reviewed and approved by Institutional Review Board of the Third Affiliated Hospital of Sun Yat-Sen University (approval number: [2015]02-95-02). The patients/participants provided their written informed consent to participate in this study.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. | 2023-03-30T13:21:09.078Z | 2023-03-30T00:00:00.000 | {
"year": 2023,
"sha1": "4b598f5d8b55b4aab2e343a84791323bcfa10e0c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "4b598f5d8b55b4aab2e343a84791323bcfa10e0c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
24747493 | pes2o/s2orc | v3-fos-license | Gender Differences in the Clinical Characteristics of Psychotic Depression: Results from the CRESCEND Study
Objective To test whether there are gender differences in the clinical characteristics of patients with psychotic depression (PD). Methods Using data from the Clinical Research Center for Depression (CRESCEND) study in South Korea, we tested for potential gender differences in clinical characteristics among 53 patients with PD. The Psychotic Depression Assessment Scale (PDAS) and other psychometric scales were used to evaluate various clinical features of the study subjects. Independent t-tests were performed for normally distributed variables, Mann-Whitney U-tests for non-normally distributed variables, and χ2 tests for discrete variables. In addition, to exclude the effects of confounding variables, we carried out an analysis of covariance (ANCOVA) for the normally distributed variables and binary logistic regression analyses for discrete variables, after adjusting the effects of marital status. Results We identified more prevalent suicidal ideation (adjusted odds ratio [aOR]=10.316, p=0.036) and hallucinatory behavior (aOR=8.332, p=0.016), as well as more severe anxiety symptoms (degrees of freedom [df]=1, F=6.123, p=0.017), and poorer social and occupational functioning (df=1, F=6.265, p=0.016) in the male patients compared to the female patients. Conclusion Our findings suggest that in South Korean patients with PD, suicidal ideation, hallucinatory behavior, and anxiety is more pronounced among males than females. This should be taken into consideration in clinical practice.
INTRODUCTION
Psychotic depression (PD) is characterized by depression accompanied by both positive and negative psychotic symptoms. [1][2][3][4][5] Because the severity-psychosis hypothesis proposed that the psychotic symptoms resulted from the severity of depression, the specifier of 'with psychotic features' was confined to severe major depression in the Diagnostic and Statistical Manual of Mental Disorders 4th edition (DSM-IV). However, several studies have now demonstrated that psychotic symptoms can also accom-pany milder depressive states. Therefore, the specifier of 'with psychotic features' can now accompany mild, moderate and severe major depression, as well as dysthymia, according to the recently published DSM-5. [6][7][8][9] In addition to psychotic symptoms, several clinical features including psychomotor disturbance (agitation or retardation), anxiety symptoms, suicidal behavior, deficits in executive function, psychiatric comorbidity, and conversion to manic episodes have been reported to be more prevalent or greater in PD than in non-PD. 1,10,11) Gender has been regarded as a significant factor influencing depression rate, symptom profile, treatment response, and illness course in depression, especially in non-PD. Hence, some gender-specific symptom constellations of non-PD have been identified. For example, in non-PD, depressed men are less likely than depressed women to suffer from increased appetite, weight gain, anxiety, interpersonal sensitivity, and somatic complaints. [12][13][14] Although many disagree with the idea, the concept of male depression syndrome has been proposed in non-PD. 15) However, it appears that potential differences in the clinical features of men and women with PD have been studied to a much lesser extent. [16][17][18][19] Fennig et al. 16) showed that female patients with PD were characterized by more frequent fatigue, psychomotor agitation, and systematized and mood-incongruent delusions than males, whereas male PD patients were characterized by more frequent feelings of worthlessness than females. In their 6-year follow-up of community death registers in the Amsterdam Study of the Elderly (AMSTEL). Welham et al. 18) showed that male patients with affective psychoses had a younger modal age-at-first-registration than females. In addition, Deligiannidis et al. 19) reported that female gender was associated with more frequent comorbid anxiety disorders as well as more frequent hallucinations and delusions with disorganization. However, they found no significant gender difference in treatment response. Finally, findings by Schoevers et al. 17) suggested that the mortality risk in males with PD was greater than that of females.
The aim of the present study was to cast further light on potential gender differences in the clinical characteristics of patients with PD, based on an analysis of data from the South Korean Clinical Research Center for Depression (CRESCEND) study. 4,5,7)
Study Overview
As described elsewhere, 4,5,7) the CRESCEND study was the first large, prospective, observational clinical study of a nationwide sample of patients with depressive disorder in South Korea. The study subjects were recruited from 1,183 patients with first-onset or recurrent depressive disorder (major depression, dysthymia, and other non-specified depressive disorder), who were beginning psychiatric treatment, from January 2006 to August 2008, and were enrolled at one or other of the 18 participating centers in the CRESCEND study (16 university-affiliated hospitals and two general hospitals across South Korea). The CRESCEND study was approved by the institutional review board of The Catholic Medical Center (receipt number: CUMC07U001). All the study subjects gave written informed consent. Certified research coordinators collected and evaluated the demographic and clinical data of the study subjects under the supervision of clinical psychiatrists at each of the research centers.
Psychotic Depression
Following the proposals of Keller et al., 10) Lichtenberg and Belmaker, 20) and Østergaard et al., 1) we defined PD, regardless of severity, as depressive disorder accompanied by delusions and/or hallucinations. The inclusion criteria were as follows: (i) age ≥18 years; (ii) diagnosis of major depression, dysthymia, or other non-specified depressive disorder within DSM-IV 8) confirmed by a Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I); 21) (iii) recorded presence of definite delusions and/or hallucinations; and (iv) availability of the fully completed Hamilton Depression Rating Scale (HAMD) 22) and Brief Psychiatric Rating Scale (BPRS). 23)
The Psychotic Depression Assessment Scale (PDAS)
Since the HAMD mainly focuses on depressive symptoms rather than psychotic symptoms, it is primarily useful for evaluating the symptom severity of the depressive domain of PD. 22) Conversely, the BPRS predominantly concentrates on psychotic symptoms, and only one item covers depressive symptoms. 23) Therefore, the 11-item PDAS, which combines the 6-item melancholia subscale (HAMD-6) of the HAMD (depressive mood, guilt feelings, work and activities, psychomotor retardation, psychic anxiety, and general somatic symptoms items) and the 5-item BPRS-5 subscale (hallucinatory behavior, unusual thought content, suspiciousness, blunted affect, and emotional withdrawal items), was developed to assess the overall severity of the entire PD syndrome. The clinical validity, responsiveness, and unidimensionality of the PDAS have been demonstrated, and the total score is therefore a valid measure of the symptom severity of PD. 2,3,24) In addition, the PDAS has been validated for the differential diagnosis of PD and non-PD. 4) With a cut-off value of one, the total score on the BPRS-5 subscale of the PDAS reliably differentiates PD from non-psychotic depressive disorder. 5) Moreover, the HAMD-6 subscale is regarded as a 'depression ruler' and a valid outcome measure in clinical trials of depression. 2,25) Most items of the HAMD are scored on a 0-4 point Likert scale, whereas all items of the BPRS are scored on a 1-7 point Likert scale. Therefore, when calculating the total scores on the BPRS-5 and the PDAS in the present study, scores on the BPRS-5 items were converted from the 1-7 range to the 0-4 range using the formula (BPRS score − 1) × 2/3. Similarly, when calculating the total scores on the HAMD-6 and the PDAS, scores on the general somatic item (range 0-2) were multiplied by 2. 2,3) In the statistical analyses, each PDAS item score was transformed into a dichotomous variable (score of 0 = symptom absent, score of 1-4 = symptom present).
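Because these conversions are purely arithmetic, they can be written down directly; the sketch below implements only the formulas quoted above, while the item keys and the example patient's scores are hypothetical.

```python
# Sketch of the PDAS scoring conventions described in the text.

def bprs_to_pdas(score: int) -> float:
    """Rescale a BPRS item from its 1-7 range to the PDAS 0-4 range."""
    return (score - 1) * 2 / 3

def pdas_total(hamd6: dict, bprs5: dict) -> float:
    """PDAS total = HAMD-6 subscale + rescaled BPRS-5 subscale.
    The HAMD general somatic item (range 0-2) is doubled to match
    the 0-4 range of the other items."""
    hamd = sum(2 * v if item == "general_somatic" else v
               for item, v in hamd6.items())
    return hamd + sum(bprs_to_pdas(v) for v in bprs5.values())

def dichotomize(item_score: float) -> int:
    """0 = symptom absent, 1 = symptom present (any score of 1-4)."""
    return 0 if item_score == 0 else 1

# Hypothetical patient:
hamd6 = {"depressed_mood": 3, "guilt": 2, "work_activities": 2,
         "retardation": 1, "psychic_anxiety": 2, "general_somatic": 1}
bprs5 = {"hallucinatory_behavior": 4, "unusual_thought_content": 3,
         "suspiciousness": 3, "blunted_affect": 2, "emotional_withdrawal": 2}
print(pdas_total(hamd6, bprs5))  # 12 + 6.0 = 18.0
```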
Other Psychometric Scales
Structured interviews including the HAMD, 22) BPRS, 23) Hamilton Anxiety Rating Scale (HAMA), 26) Clinical Global Impression of severity (CGI-s), 27) and Social and Occupational Functioning Assessment Scale (SOFAS) 28) were used to evaluate depressive symptoms, positive and negative symptoms, anxiety symptoms, global severity, and social function, respectively. All evaluators undertook a training program twice a year, with a formal consensus meeting to guarantee the accurate application of the psychometric assessments. In addition, self-report questionnaires including the World Health Organization Quality of Life questionnaire-abbreviated version (WHOQOL-BREF) 29) and the Alcohol Use Disorder Identification Test (AUDIT) 30) were used to evaluate quality of life and alcohol use, respectively. All psychometric scales have been formally translated into Korean and validated as reliable assessment tools in the relevant Korean populations. [31][32][33][34][35][36] Higher scores on the HAMD, 22) BPRS, 23) HAMA, 26) CGI-s, 27) and AUDIT 30) represent greater severity of the respective symptoms, whereas lower scores on the SOFAS 28) and WHOQOL-BREF 29) represent poorer social function and quality of life, respectively.
Statistical Analyses
The distributions of continuous variables were tested for normality using the Kolmogorov-Smirnov test. The significance of gender differences in demographic and clinical characteristics and assessment scale scores was evaluated using the independent t-test for normally distributed variables, the Mann-Whitney U-test for non-normally distributed variables, and the χ 2 test for discrete variables. The effects of potential confounding variables on gender differences were adjusted by means of analysis of covariance (ANCOVA) for normally distributed variables and binary logistic regression analysis for discrete variables. In the binary logistic regression analysis, the female group was defined as the reference category of the covariate. In our study, the proportion of unmarried individuals was significantly greater among males than among females (Table 1). Since it has been reported that marriage has a moderating effect on the prevalence and psychological consequences of depression, [37][38][39][40] we treated marital status as a covariate in the ANCOVA and binary logistic regression analyses to exclude such an effect. Statistical significance was set at p<0.05. All statistical analyses were performed with PASW Statistics 18.0 for Windows (IBM Co., Armonk, NY, USA).
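A hedged Python equivalent of this adjustment strategy is sketched below with statsmodels on synthetic data; the variable names are invented, and exponentiating the logistic regression coefficients yields adjusted odds ratios (aORs) of the kind reported above.

```python
# Illustration of ANCOVA and marriage-adjusted logistic regression on
# synthetic data; variable names and values are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import logit, ols

rng = np.random.default_rng(1)
n = 53
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),              # 1 = male (female = reference)
    "married": rng.integers(0, 2, n),           # covariate: marital status
    "hama": rng.normal(20, 6, n),               # anxiety score (HAMA)
    "suicidal_ideation": rng.integers(0, 2, n), # dichotomized item
})

# ANCOVA for a normally distributed outcome: gender effect on HAMA
# adjusted for marital status (an OLS model with both terms).
ancova = ols("hama ~ male + married", data=df).fit()
print(sm.stats.anova_lm(ancova, typ=2))

# Binary logistic regression for a discrete outcome, adjusted for marriage;
# exponentiated coefficients are the adjusted odds ratios (aOR).
model = logit("suicidal_ideation ~ male + married", data=df).fit(disp=0)
print(np.exp(model.params))
```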
Comparison of Demographic and Clinical Characteristics
A total of 53 PD patients from the CRESCEND study met the inclusion criteria for this study. Table 1 shows a comparison of the clinical characteristics of male and female PD patients (χ2=0.175, df=1, p=0.816).
DISCUSSION
In summary, this study of PD patients from the CRESCEND study showed that, after adjusting for the effects of marital status, male PD patients were more likely to display suicidal ideation and hallucinatory behavior, and had greater severity of anxiety symptoms and poorer social and occupational functioning, than female PD patients.
Our finding of a higher prevalence of current suicidal ideation in men than women is consistent with the results of prior studies. 16,[41][42][43] In addition, it could be part of the explanation as to why mortality rates in PD appears to be greater in men than in women. 16) However, we found no significant gender difference regarding the history of attempted suicide.
Of the 11 items of the PDAS, only hallucinatory behavior differed according to gender, with a higher prevalence among male patients. Conversely, in the study of pharmacotherapy of psychotic depression (STOP-PD), Deligiannidis et al. 19) found that the prevalence of hallucinations and delusions with disorganization was higher in women than in men. Although female gender was significantly associated with divorced or widowed marital status in their study, the authors did not adjust for the effects of marital status on clinical presentation. Hence, the difference between the results of this study and that of Deligiannidis et al. 19) may be due to a potential confounding effect of marital status.
The greater severity of overall anxiety symptoms in men than in women detected in our study also appears to be inconsistent with Deligiannidis et al.'s finding 19) of more common comorbid anxiety disorders in women compared to men. As mentioned above, this discrepancy could be caused by a confounding effect of marital status. Furthermore, in their reanalysis of data from the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) cohort, Cassano et al. 44) found a significant association between hallucinations and anxiety disorders, including posttraumatic stress disorder and panic disorder. Thus, in the light of the results from Deligiannidis et al. 19) (hallucinations and anxiety more prevalent among women), Cassano et al. 44) (significant association between hallucinations and anxiety disorders in PD), and the results of the present study (higher prevalence of hallucinatory behavior and higher severity of anxiety in men), we speculate that there may be a relationship between hallucinations and anxiety in PD, irrespective of gender.
The poorer social and occupational functioning in men compared to women detected in our study could be an epiphenomenon of the more severe psychopathology profile of the male patients (hallucinations, suicidal ideation, and anxiety). Zaninotto et al. 45) reported that psychosis during the course of depression was significantly associated with poorer functioning in the areas of visual and verbal learning and execution. Thus, poorer cognitive functioning may represent the explanatory link between the more severe psychopathology profile and the poorer social functioning in men with PD in the present study.
Our study has several limitations. Firstly, since our sample was small, the power to detect gender differences was limited. Secondly, because we did not adjust for multiple comparisons, the possibility of type I errors was increased. Thirdly, the study design was cross-sectional rather than longitudinal. Finally, since the rates of comorbid personality disorders and other mental disorders were not evaluated in the CRESCEND study, their potential effects on gender differences in clinical characteristics could not be taken into account. Despite these limitations, our study has the virtue of exploring gender differences in the clinical characteristics of patients with PD, a poorly investigated area. Our findings suggest that in South Korean patients with PD, suicidal ideation, hallucinatory behavior, and anxiety are more pronounced among males than females. This should be taken into consideration in clinical practice.
This study was supported by a grant from the Korea Healthcare Technology R&D Project, Ministry of Health and Welfare, Republic of Korea (Grant No. HI10C2020). The Ministry of Health and Welfare had no role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report, or in the decision to submit the paper for publication. | 2018-04-03T05:49:58.263Z | 2015-12-01T00:00:00.000 | {
"year": 2015,
"sha1": "05e29fe468567437d2032845fcbd68a0a3c76afc",
"oa_license": "CCBYNC",
"oa_url": "http://www.cpn.or.kr/journal/download_pdf.php?doi=10.9758/cpn.2015.13.3.256",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "05e29fe468567437d2032845fcbd68a0a3c76afc",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17548525 | pes2o/s2orc | v3-fos-license | A Short Motivational Program Based on Temporary Smoking Abstinence: Towards Increased Self-Efficacy to Quit in Psychiatric Inpatients
Background: Specific approaches for smokers presenting with psychiatric disorders are scarce, even though the prevalence of smoking does not tend to decline in mental health settings, in contrast with general populations of most western countries. Methods: Inpatient smokers (n=69) in a public mental health hospital participated in a multicomponent motivational intervention based on a temporary 26 h abstinence period. Evaluations, performed 1 week pre-, during and 1 week post-intervention, included cigarette consumption, carbon monoxide level, stage of change, craving, as well as anxiety, depression, well-being and smoking cessation self-efficacy. Results: Carbon monoxide level significantly decreased during the intervention (median 16 to 6 ppm, p<0.001), with 76.8% of participants using nicotine replacement therapy. Craving decreased (MPSS 5 to 4, p=0.01), together with anxiety (STAI-State 47 to 38, p<0.001) and depression (BDI-21 18 to 13, p<0.001), whereas well-being increased (WHO-5 11 to 16, p<0.001). During the proposed 26 h abstinence period, 45.6% of participants successfully abstained from smoking, of whom 58.1% subsequently attempted quitting. Ten participants (14.5% of 69) decided to stop smoking even though they had no intention to quit before the program. Self-efficacy for permanent cessation did not change, but self-efficacy for temporary abstinence increased (median 8 at pre- to 9 at post-evaluation, p=0.003). Conclusion: A short multicomponent motivational intervention based on temporary abstinence can be a positive experience for patients with severe psychiatric disorders, contribute to increased self-efficacy, and trigger quit attempts. The present study suggests that integration of such a program in mental health care is feasible and well accepted.
Introduction
Smoking and poor mental health appear tightly entangled, as smoking prevalence, morbidity and mortality are clearly higher in psychiatric patients than in the general population [1,2]. Hospital stay can be considered as an opportunity to treat tobacco dependence and smoke-free environments requiring temporary abstinence are hypothesized to promote smoking cessation [3]. However, despite smoke-free policies, mental health settings still face tobacco-related difficulties. The number of smokers remains alarmingly high, and they seem to adapt to indoor smoking bans and continue smoking outdoors. Smoking rates decline less among individuals with mental health problems than in the general population [4][5][6].
Treating nicotine dependence in psychiatric patients is more difficult than in the general population, with a higher degree of dependence, frequent relapse often associated with the psychiatric condition, and increased needs for support and nicotine replacement therapy (NRT) [7]. According to the transtheoretical model (TTM), or stages of change model, a large number of psychiatric patients are "precontemplators" who do not consider stopping [8]. The challenge is to help these smokers shift to the "contemplation" stage, introducing them to the idea of stopping. Special emphasis is needed on the largely neglected phase preceding the moment a smoker feels ready to engage in a quit attempt, or on the initial phases of the smoking history, when smokers do not yet consider themselves as such [9].
It is not uncommon to encounter barriers within hospital staff when psychiatric patients express their intention to quit smoking. In our clinical practice, we observed that professionals sometimes advise patients not to quit because they fear worsening the psychiatric condition, for example when patients are not yet stabilized or have just been stabilized after a severe episode. Onset of major depressive episodes upon cessation has occasionally been reported in the literature, and the hypothesis has been raised that smoking may act as self-medication against negative mood [10]. Anxiety and depression are part of the usual nicotine withdrawal symptoms, underlining the very close interweaving of these elements and the particular obstacles on the way to smoking cessation for psychiatric patients [11]. Together with established practice in psychiatric settings, this context partly explains why staff tend to neglect tobacco-related interventions [3].
To our knowledge, much support is available for smokers ready for a quit attempt, but there is a paucity of interventions for patients in the preceding, even more crucial stages, who are the majority of smokers in mental health care settings. To address this shortcoming, we developed a tailored intervention, consisting in a short 26 h smoking abstinence period, allowing patients to build up a positive experience and increase self-efficacy towards smoking cessation [12]. The objective of our study was to evaluate this intervention, by comparing data collected before and one-week after the program. We studied central issues related to smoking cessation, such as self-efficacy, motivational changes (TTM) and quit attempts. We also studied negative affects, to address the fears of potential increases of anxiety and depression, and positive affects, as we hypothesized that smoking abstinence and concomitant increase of well-being and self-efficacy might be important leverages towards cessation.
Setting and participants
The intervention was designed in 2010 and its feasibility and acceptability were assessed in an earlier report [12]. The present study ran between June 2011 and July 2014. Approximately 4 sessions per year were proposed to inpatients at the Department of Mental Health and Psychiatry, Geneva University Hospitals (capacity of about 100 beds, 5 acute care units, 2 rehabilitation units).
Inclusion criteria were as follows: (a) being a smoker (widely defined as having smoked at least one cigarette in the past 3 days), ready to commit to a 26 h smoking abstinence period ("not a puff"); (b) being hospitalized, independently of psychiatric diagnoses; (c) presenting a stable clinical condition, compatible with study participation according to the nursing team and the attending psychiatrist. Exclusion criteria were insufficient level of French and cognitive impairment.
The program (described below) was presented to all inpatients by means of posters and oral information by mental health care providers. Potential participants were further informed during a preliminary interview and invited to provide written informed consent before entering the study. The study protocol was approved by the ethics committee of the Department of Mental Health and Psychiatry, Geneva University Hospitals (reference number 11-057).
Intervention
The multicomponent intervention comprises three distinct features. The first, aimed at increasing self-efficacy, is the patients' commitment to succeed in not smoking for 26 h. Emphasis is placed on succeeding: generous NRT is proposed and individual support is provided by staff with experience in both mental health and tobacco addiction. Additional boosting is provided by the group experience, the common challenge not to smoke, and a "diploma" for those who succeed. Repeated monitoring of expired carbon monoxide allows individual feedback on the ongoing experience. A second essential component of the program relies on its positive content, i.e., activities aimed at enhancing relaxation and well-being (thermal baths or sport, music or occupational therapy) and showing that non-smoking is not necessarily painful or stressful. A third ingredient is smoking-related information during formal (sessions with tobacco specialists) and informal moments (support throughout the day).
The program begins on Thursday at 8:30 am and involves group sessions, thermal baths, lunch in a restaurant, an interactive tobacco-related information session, afternoon tea, music therapy, and a final group meeting. Patients are back in their hospital units at 18:00. They return on Friday morning at 8:30 am, have breakfast together and participate in a last group session. Nicotine replacement is provided as slow-acting products (patches of 7, 14 and 21 mg) and fast-acting products (10 mg inhalers, 2 and 4 mg gums, 1 mg lozenges), with possible use of several products. A structured group setting is provided by having the same patients and staff together during the whole 26 h period (with the exception of the evening/night). The team comprises a medical doctor, a nurse, an occupational therapist and a psychologist, all working in inpatient psychiatric units. At each intervention, an additional staff member (preferably a smoker) joins the group. Tobacco specialists are in charge of the information session. The whole program is free of charge.
Study design
Participants were evaluated on 3 occasions, during individual interviews conducted by a psychologist (IK) and a trainee psychologist.
The first interview took place during the week preceding the intervention and allowed confirming participation, on the basis of clinical condition and acceptance of the program. It allowed patients to establish a personal contact with the coordinator and gain reassurance about their participation. Assessments included present characteristics of smoking, craving and motivation to quit, self-efficacy, anxiety, depression and well-being (described below), in addition to demographics and history of smoking. Clinical characteristics such as diagnosis and history of hospital stays were obtained from medical charts and discharge letters.
The second evaluation was performed immediately after the intervention (Friday morning), with the same instruments and satisfaction with the program.
The third assessment was made about one week after the program, with the same instruments as in the first 2 evaluations and a standardized substance use interview (details below). This last interview also provided an opportunity to encourage patients towards a cessation attempt and verify whether further specialized tobacco support was needed.
Expired carbon monoxide, use of nicotine replacement and, where applicable, cigarettes were recorded on these 3 occasions and additionally on Thursday morning and afternoon, allowing monitoring of carbon monoxide (CO) on Thursday 9:15 am, Thursday 5:15 pm and Friday 10:00 am. Main outcome variables were success rates for temporary 9 h and 26 h smoking abstinence, and motivational or behavioural changes regarding smoking between the pre- and 1-week post-intervention assessments.
Instruments
Baseline characteristics included demographics and illness-related variables (primary and comorbid diagnoses, number of hospital stays). Substance use was documented by searching medical charts and using the Alcohol, Smoking and Substance Involvement Screening Test (ASSIST, [13]).
Positive and negative affects were measured using questionnaires for state anxiety (State-Trait Anxiety Inventory, STAI, state part [14]), depression (Beck Depression Inventory, BDI-21 [15]) and well-being (WHO-5 Well-being Index [16]). Satisfaction with the program was assessed with the question "Globally, I rate this intervention as: not at …". Use of nicotine replacement and cigarette consumption were self-reported, with quit attempts considered as such if they exceeded 24 h. Expired carbon monoxide (CO) was measured using a piCO+ Smokerlyzer (Bedfont, UK). Values of 0-6 ppm correspond to non-smokers, 7-15 ppm to low-dependence smokers and >15 ppm to strongly addicted smokers. The Heaviness of Smoking Index (HSI [17,18]) was calculated based on two questions of the Fagerström Test for Nicotine Dependence (FTND): "How many cigarettes a day do you smoke?" (answers in 4 categories) and "How soon after you wake up do you smoke?" (4 categories). Two questions from the Wisconsin Predicting Patients' Relapse questionnaire (WI-PREPARE [19]) were used to evaluate the time spent with smokers ("I'm around smokers much of the time") and craving ("When I haven't been able to smoke for a few hours, the craving gets intolerable") on 7-point rating scales. Craving was also measured using the total score of 2 items of the Mood and Physical Symptoms Scale (MPSS [20]): "How much of the time have you felt the urge to smoke today?" and "How strong have the urges been today?" (0=not at all/no urges to 5=all the time/extremely strong). Withdrawal symptoms were measured using the Minnesota Nicotine Withdrawal Scale (MNWS-R [21,22]), which includes 9 items (frustration, anxiety, depression, craving, concentration, appetite, insomnia, restlessness, irritability) rated on 4-point Likert scales (0=not present to 3=severe) and a total score. Readiness to stop smoking was evaluated using Kahler's Commitment to Quitting Smoking Scale and Biener's Contemplation Ladder [23,24]. Stage of change was categorized as pre-contemplation (no intention to quit), contemplation (intention to quit in the next 6 months), preparation (intention to quit in the next 30 days, with possibly a quit attempt in the past year), and action (stopped smoking in the past month) [25].
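Most of these instruments reduce to simple scoring rules that can be made explicit in code. The sketch below covers the HSI, the two-item MPSS craving score, and the expired-CO interpretation; the HSI category cut-points follow the standard FTND item coding, which the text implies but does not list.

```python
# Scoring sketch; HSI cut-points assume the standard FTND item coding.

def hsi(cigs_per_day: int, minutes_to_first_cig: int) -> int:
    """Heaviness of Smoking Index (0-6) from the two FTND items."""
    if cigs_per_day <= 10:   cpd = 0
    elif cigs_per_day <= 20: cpd = 1
    elif cigs_per_day <= 30: cpd = 2
    else:                    cpd = 3
    if minutes_to_first_cig <= 5:    ttf = 3
    elif minutes_to_first_cig <= 30: ttf = 2
    elif minutes_to_first_cig <= 60: ttf = 1
    else:                            ttf = 0
    return cpd + ttf

def mpss_craving(urge_time: int, urge_strength: int) -> int:
    """MPSS craving: sum of the two 0-5 items (total 0-10)."""
    return urge_time + urge_strength

def co_category(ppm: float) -> str:
    """Interpret expired CO as described in the text."""
    if ppm <= 6:  return "non-smoker"
    if ppm <= 15: return "low-dependence smoker"
    return "strongly addicted smoker"

print(hsi(20, 10), mpss_craving(3, 2), co_category(22.6))
```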
Two questions evaluated self-efficacy or perceived confidence in remaining abstinent from smoking, either on a temporary basis ("If I would participate in this program again, I think I would be able not to smoke during 26 h") or a permanent basis ("I'm convinced that one day, whatever happens, I will stop smoking"). Answers were given on 10 point scales (from "no, impossible" (1-2) to "yes, absolutely" (9-10)).
Data analysis
Frequency (% of valid cases) was used to describe categorical variables. Median (range) and mean (standard deviation, sd) were used for ordinal and continuous variables. Comparison of independent samples proceeded with the Mann-Whitney U test. Change over time was tested with the Wilcoxon signed-rank test.
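The two tests map directly onto scipy, as sketched below with made-up paired and group scores.

```python
# Illustration only: the scores are invented, not study data.
from scipy.stats import mannwhitneyu, wilcoxon

# Change over time within the same patients: Wilcoxon signed-rank test
# (e.g., WHO-5 well-being at pre-evaluation vs. one week post-intervention).
pre_who5 = [11, 9, 14, 10, 12, 8, 13, 11]
post_who5 = [16, 12, 15, 14, 17, 10, 18, 13]
print(wilcoxon(pre_who5, post_who5))

# Independent samples (e.g., self-efficacy in patients who succeeded vs.
# failed the 26 h abstinence): Mann-Whitney U test.
succeeded = [9, 10, 8, 9, 10, 9]
failed = [7, 6, 8, 7, 5, 7]
print(mannwhitneyu(succeeded, failed))
```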
Participation
After the first individual interview, 176 participants were admitted into the program, of which 110 met the study inclusion criteria. Reasons for exclusion were: second participation in the program (n=40); refusal to participate (4); cognitive impairment (5); worsening of clinical condition (12); language barrier (2); administrative reasons (2). One patient was excluded because he had stopped smoking 2 months before the intervention (he entered the program to reinforce motivation).
Nineteen patients were not present on the intervention day because of discharge from the hospital or worsened clinical condition. Four patients dropped out during the first morning because of acute symptoms or difficulty abstaining from smoking. Eighteen patients were not assessed at 1 week post-intervention, mainly because of hospital discharge. The study cohort thus comprised 69 patients assessed on 3 occasions (1-8 days before the program, second day of the intervention, 4-12 days after the program).
Baseline characteristics of participants
Participants were aged 17-64 (mean 34.3, sd 12.8). Socio-demographic and clinical characteristics are reported in Table 1. The sample comprised patients with severe psychopathology and social vulnerability (multiple diagnoses, several hospital stays, low education, living alone and depending upon disability pension or social aid). Only a minority had a principal ICD-10 diagnosis of substance-related disorders, but comorbidity was frequent. The sample had a high level of nicotine dependence (mean HSI 3.3, sd 1.8) and a long duration of tobacco consumption (mean 13.8 years, sd 10.4). At pre-intervention, the mean level of carbon monoxide was 22.6 ppm (sd 16.8), whereas the self-reported mean number of cigarettes per day over the past 6 months was 19.0 (sd 11.3). A majority were not yet actively planning to stop tobacco use (mean score on Biener's 0-10 Contemplation Ladder 6.3, sd 2.8). More details of tobacco use are given in Table 1.

Table 1: Socio-demographic, clinical and smoking characteristics of psychiatric inpatients participating in a temporary smoking abstinence program (n=69).
Self-efficacy for temporary abstinence, as measured before the intervention, was significantly higher for patients who succeeded with 26 h abstinence than for the ones who failed (median 9, n=30, vs. median 7, n=37; Mann-Whitney U test, p=0.003). No significant difference was observed for self-efficacy for permanent abstinence.
Negative and positive affects
As shown in Figure 1, anxiety (STAI-State) and depression (BDI-21) scores decreased between the pre-evaluation and the end of the intervention, whereas well-being (WHO-5) increased.
Temporary abstinence and quit attempts
Among the 31 patients who succeeded with the 26 h abstinence period, 58.1% (n=18) decided to extend it and attempt quitting. After one week, 11 were still totally abstinent and 4 had resumed smoking but still intended to stop. Among participants who did not maintain the 26 h abstinence period, none engaged into a quit attempt. Thus, prevalence of smoking abstinence after 7 days was 15.9% (11 of 69).
Negative and positive affects
Depression, anxiety and well-being at one week confirmed the positive effects of the 26 h intervention (Figure 1).
Behavioral and motivational changes
As described in Table 3, behavioural changes were reflected by a significant decrease in cigarette consumption and carbon monoxide level that persisted up to one week. Subgroup analysis revealed that carbon monoxide level did not change significantly for participants who did not try quitting (n=50, median 21.5, range 4-100 vs. 24, range 2-66; p=0.55), but significantly decreased both for those who successfully stopped smoking (n=11, median 10, range 1-33 vs. 3, 1-7; p=0.005) and for those who unsuccessfully tried to quit (n=7, median 16, range 3-25 vs. 10, 4-13; p=0.05). Motivational changes, as measured by Kahler's commitment to quitting scale and Biener's contemplation ladder, were not significant. Nevertheless, a significant difference was observed for stage of change. Of 40 patients who were pre-contemplators before the intervention, 28 remained at the same stage one week after the intervention, but 12 reached contemplation or action stages. Of the 26 not in pre-contemplation before the intervention, only 3 shifted back to pre-contemplation (McNemar test, p=0.04). Of 15 persons in the "action" stage one week after the intervention, 6 were "pre-contemplators" at pre-evaluation, 7 were "contemplators" and 2 were "in preparation": unlike in the traditional TTM, 40% had thus shifted directly from pre-contemplation to action. Ten participants (14.5% of 69) with no intention to quit smoking at pre-evaluation, but successful with the 26 h abstinence period, took the decision to stop (8 in continuation of the program and 2 taking an appointment for individual support in the coming week).
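The stage-of-change transition test can be reconstructed from the counts above (28 and 12 among baseline pre-contemplators, 3 and 23 among the others); applying the exact McNemar test from statsmodels to that 2×2 table gives p ≈ 0.035, consistent with the reported p = 0.04.

```python
# Re-computation of the McNemar test from the transition counts in the text.
from statsmodels.stats.contingency_tables import mcnemar

# Rows: pre-contemplation before the intervention (yes / no);
# columns: pre-contemplation one week after (yes / no).
table = [[28, 12],
         [3, 23]]
print(mcnemar(table, exact=True))  # tests the asymmetry of the 12 vs. 3 shifts
```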
Self-efficacy
Self-efficacy to quit permanently did not change, but self-efficacy for temporary abstinence significantly increased (Table 3). More than half (52.2%) of the participants had higher scores after one week, whereas about one third (32.8%) remained unchanged. Subgroup analysis according to success of abstinence during the intervention day showed that self-efficacy for temporary abstinence significantly increased for participants who succeeded with the 26 h abstinence (median 10 vs. 9, n=30; Wilcoxon signed-rank test, p=0.003), but not for those who only achieved 9 h abstinence or failed to do so. After one week, self-efficacy for temporary abstinence was significantly higher among patients who succeeded with the program (median 10, n=31 vs. median 8, n=37; Mann-Whitney test, p<0.001).
Discussion
Results showed that the majority of smokers presenting with severe psychiatric disorders were able to comply with a 9 or 26 h smoking abstinence period, and that this experience did allow them to progress in motivational processes such as stage of change and self-efficacy. Thus, the main objective of the intervention was fully met. Furthermore, a quarter of patients made a quit attempt shortly after the program and 10% actually decided to stop smoking although this was not their intention before the intervention, suggesting that the program was able to trigger new perspectives.
The TTM model, postulating that step-by-step psychological changes lead to behavioural changes (stopping smoking), although useful, has never been formally validated, and about half of quit attempts seem to be unplanned [26,27]. In our study, some patients decided to stop even though they were in the pre-contemplation stage before the intervention. In the light of the TTM, one would expect smokers to have evolved in their cognitions to be ready for the action stage. Alternatively, one may postulate that behavioural events, such as temporary abstinence, might induce psychological changes and catalyse the stopping process. This is in agreement with the theoretical concept that positive consequences of an action (e.g. participation in the multicomponent intervention) are able to affect motivational processes that precede volitional or planning aspects of behaviour changes [28,29]. Intervention on motivational processes by means of positive experience seems crucial for the population targeted by our intervention, most of whom are in the pre-contemplation stage, not yet planning to stop smoking.
Several components of the proposed intervention are important. Firstly, generous NRT was proposed in order to reduce withdrawal symptoms. Our study indicated that craving (MPSS) was lower during the intervention than at pre-evaluation and that withdrawal symptoms significantly decreased (MNWS-R). This is of special interest, since the severity of withdrawal signs is generally highest within the first 3 h after cessation and decreases over time [30,31]. In our sample, only 16% failed to maintain abstinence for the first 9 h, suggesting that most smokers tolerated abstinence during that period. However, a minority felt that it was very difficult or uncomfortable to refrain from smoking, mainly those who refused NRT (e.g., increased auditory hallucinations in a schizophrenic patient not accepting NRT). Our clinical impression was that the large majority of participants felt fine, relaxed and proud about their participation, with acceptable craving-related discomfort.
A second important component is the positive and pleasurable experience associated with the intervention, as shown by high satisfaction and increased well-being. Better oxygenation due to abstinence, light exercise and relaxation (thermal baths) may contribute to this effect, in addition to psychological effects (e.g. satisfaction of succeeding with the personal challenge of not smoking for 26 h). The program was indeed designed to be pleasurable and attractive. It is well known that nicotine activates reward pathways in the brain and induces pleasurable sensations. It was hypothesized that the association of abstinence with similarly pleasurable moments might help enhance readiness for change with respect to smoking behaviour. Furthermore, the program was meant to be appealing to psychiatric inpatients who might not be willing to engage in usual smoking cessation programs.
Social and relational aspects are other important elements of the program. Many patients were socially isolated and appreciated the group experience and support from other participants and staff. The individualized relationship established between staff and participants during the pre-intervention session seemed crucial in helping them overcome their fears and enrol. During the intervention, it helped them overcome their anxiety about spending a day in an unknown environment.
Formal information about smoking is the fourth ingredient of the program. Participants learned new facts about tobacco and received individual feedback about their carbon monoxide measurements. For many, this was an opportunity to become aware of a directly measurable effect of smoking on their lungs.
Finally, the proactive characteristic of the intervention needs to be emphasized, with personal commitment of patients towards temporary abstinence and supporting but non-directive attitudes of the staff. A proactive intervention to motivate smokers to quit was associated with improved results as compared to usual smoking cessation care [32]. Furthermore, patient empowerment, i.e. helping them to take autonomous, informed decisions, might be a fundamental element in the process towards quitting [33].
These different components of the program most likely contribute together to the motivational change, increased self-efficacy, and decision to attempt quitting, as observed in the present study. A review of predictors of cessation attempts showed the importance of motivation and self-efficacy, even for smokers not currently willing to quit [34,35]. Our results are in keeping with this finding, with significantly higher self-efficacy in patients who attempted to stop in continuation of the program. We also observed increased self-efficacy for temporary abstinence during the intervention and hypothesized that it might contribute to the decisional process towards cessation in a psychiatric population of mostly pre-contemplators.
In contrast with the fears of many clinicians that smoking cessation might be accompanied by a worsening of patients' condition, we observed a significant decrease of anxiety and depression during the intervention day that persisted up to one week later. Apart from somatic health benefits, a growing literature points to improvement in mental health coupled with smoking cessation interventions [36]. In a randomized study, enrolment in tobacco cessation treatment showed a broad therapeutic effect with less frequent psychiatric rehospitalisation after 18 months [37]. A meta-analysis of 26 studies concluded that smoking cessation was associated with mental health benefits, with comparable effect sizes at 6 weeks, six months or longer [38]. A large epidemiological study also concluded that smoking cessation was associated with reduced risk for mood, anxiety and alcohol-related disorders, even among smokers with a pre-existing psychiatric disorder [39]. As mental and physical health are intertwined, interventions to promote smoking cessation deserve to be fully integrated within health care delivery systems [1]. Furthermore, techniques used in mental health care such as distress tolerance skills or working upon cognitive distortions might present combined benefits for smoking cessation too [40,41].
Limitations of the present study include the short duration of the intervention and follow-up period, in comparison with the long processes involved in smoking cessation, which may last for years if not decades. Research has shown that the first week of abstinence was highly predictive of long-term abstinence in non-psychiatric samples [42]. However, the long-term effects of the proposed intervention remain to be addressed. Another limitation is the absence of a control group, which will be needed in further studies aimed at confirming specific effects of the program. Indeed, all patients received psychiatric care which might have contributed to the decrease of symptoms one week after the intervention. The multicomponent structure of the intervention also renders the evaluation of single elements difficult.
Conclusion
Interventions for smokers presenting with severe psychiatric disorders are necessary and should target not only patients willing to quit but all smokers, independently of their motivation at any given time. The present study showed benefits of temporary smoking abstinence, as part of a global motivational intervention aimed at increasing self-efficacy and catalyzing the complex processes towards smoking cessation.
2D Detection Model of Defect on the Surface of Ceramic Tile by an Artificial Neural Network
The ceramic tile visual inspection process is divided into three parts: texture classification, color classification, and surface defect detection. In industrial application it is a difficult process, because it is done manually, involves many workers, and is carried out in noisy environments with differences in temperature and humidity. This study emphasized quality control based on an automated visual inspection system for detecting defect types on the ceramic surface. The process performed comprises image capture, image processing, feature extraction, training, testing, and classification. Preprocessing consists of image resizing, RGB color conversion, segmentation, and feature extraction. Features are extracted using the Gray Level Co-occurrence Matrix (GLCM). The generated features are the artificial neural network (ANN) inputs for training and testing, using Matlab 2013a, to detect more than one type of surface defect with good accuracy. The artificial neural network training uses backpropagation with a network architecture of 14 input features, 27 hidden neurons, and 1 output. The learning rate used is 0.001, with 75 training samples and 23 testing samples. The position and type of defect can be detected with 83% accuracy and an error rate of 17%. The maximum detection time is 1.06 seconds and the minimum detection time is 0.1 seconds. Therefore, by using an automated inspection system, human-caused inspection errors can be reduced, increasing quality and productivity in production.
Introduction
The national ceramics industry remains promising in the long run, in line with continued growth of the industrial market. Opportunities for the development of this sector are also supported by government programs for improving infrastructure, property, and housing development, which are expected to boost national ceramic consumption. The installed production capacity of national ceramic flooring in 2016 amounted to 580 million square meters, with realized production reaching 350 million square meters. Of the current production, around 87 percent meets the needs of the domestic market, and the rest is exported to countries in Asia, Europe, and America. The Indonesian ceramics industry can compete in the era of free trade and expand internationally; strategic steps that must be carried out include strengthening the industrial structure, improving the quality of human resources (HR), technological innovation through research and development, and infrastructure development. There is huge demand for ceramics for infrastructure and buildings due to affordable prices, easy installation and maintenance, various color variations, sizes, and features, as well as moisture resistance. This encourages producers to produce large quantities quickly while maintaining quality through a visual inspection system. The authors of [1,2] divide the ceramic visual inspection process into three parts: texture classification, color classification, and detection of surface defects. Subsequent work proposed a more efficient method for inspecting and classifying the consistency of ceramic colors by building a fuzzy neural network, a combination of artificial neural networks with fuzzy techniques. This combination can increase accuracy by up to 98% compared with earlier methods such as histograms, LB center clusters, and BP neural networks. In [4,5] it was found that edge detection using the LoG method yields more detail than the Prewitt method, because the LoG method is more sensitive to blur and can therefore delineate damaged regions properly. However, that research was less effective in detecting damage in the form of bubbles and in dark ceramic images; therefore, the image needs to be sharpened so that defective objects in ceramics can be detected properly.
In industrial application, inspection is a difficult process because it is done manually, involves many workers, and is carried out in noisy environments with differences in temperature and humidity. It also involves measurement variations such as color analysis, dimensional verification, and surface damage detection. The authors of [6] identified damage and classified ceramics automatically by developing a training and detection algorithm with a probabilistic neural network, which obtained a 5% increase in accuracy and a reduction in computational time. With the development of increasingly fast computer hardware, computers are no longer used merely to assist human work but have begun to operate as decision support systems that help individuals or groups solve problems in the decision-making process. The visual inspection process is an important issue for ceramic producers competing globally [7]. Visual inspection of ceramic surfaces (surface quality) is a parameter that can be directly observed both on the production line and by consumers. Because in-line visual inspection is a bottleneck for productivity, much research has been done to build better systems.
This study emphasizes the detection of more than one defect type on the surface of a ceramic tile, based on computer vision using an artificial neural network. The defect types are shown in Table 1. The approach can help producers improve production performance, reduce material losses and production costs, standardize quality, and eliminate operator subjectivity. Moreover, it can help provide consumers with ceramics of definite, standardized quality at affordable prices and in diverse variations.
Preprocessing
After the image is captured, it is processed by resizing, converting RGB to hue, computing the mean hue, computing the squared deviation of each pixel from the mean hue, and computing the Euclidean distance of each pixel from the mean hue. The image is then segmented by thresholding at 0.134, followed by morphological operations (bwareaopen and dilation). Defect areas are specified using labeling and cropping: labeling assigns an identifier to each defect, while cropping is used to compute the defect area. Finally, the feature extraction process produces the input data for neural network training. The step diagram is shown in Figure 2.
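As a concrete illustration, the sketch below implements the described pipeline in Python with scikit-image; it is a minimal sketch under stated assumptions, not the authors' Matlab code. The 0.134 threshold comes from the paper, while the resize target, minimum object size, dilation radius, and the use of absolute hue deviation as the distance measure are illustrative assumptions.

```python
# Minimal sketch of the described preprocessing pipeline (assumes scikit-image).
import numpy as np
from skimage import io, transform, color, morphology, measure

img = io.imread("tile.png")[:, :, :3]          # load RGB tile image
img = transform.resize(img, (256, 256))        # resize (target size assumed)

hue = color.rgb2hsv(img)[:, :, 0]              # convert RGB to hue channel
dist = np.abs(hue - hue.mean())                # per-pixel deviation from mean hue
mask = dist > 0.134                            # threshold from the paper

mask = morphology.remove_small_objects(mask, min_size=50)    # cf. Matlab bwareaopen
mask = morphology.binary_dilation(mask, morphology.disk(3))  # dilation

labels = measure.label(mask)                   # label each candidate defect region
patches = []
for region in measure.regionprops(labels):     # crop each defect for the GLCM stage
    r0, c0, r1, c1 = region.bbox
    patches.append(img[r0:r1, c0:c1])
```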
Feature Extraction
Feature extraction is performed on the defective surface after cropping, using the Gray Level Co-occurrence Matrix (GLCM). Fourteen features are extracted from each defective image: defect area, var R, var G, var B, mean x, mean y, var x, var y, contrast, dissimilarity, homogeneity, correlation, entropy, and energy.
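A minimal sketch of the GLCM texture features is given below, assuming scikit-image's graycomatrix/graycoprops; the offset distance and angle are assumptions, as the paper does not report them. The remaining features (defect area, RGB variances, and centroid means/variances) would be computed from the segmentation mask and color channels rather than from the GLCM.

```python
# Minimal sketch of GLCM feature extraction for one cropped defect patch.
import numpy as np
from skimage import color
from skimage.feature import graycomatrix, graycoprops
from skimage.util import img_as_ubyte

def glcm_features(patch_rgb):
    gray = img_as_ubyte(color.rgb2gray(patch_rgb))        # 8-bit grayscale patch
    glcm = graycomatrix(gray, distances=[1], angles=[0],  # offset assumed
                        levels=256, symmetric=True, normed=True)
    feats = {prop: graycoprops(glcm, prop)[0, 0]
             for prop in ("contrast", "dissimilarity", "homogeneity",
                          "correlation", "energy")}
    p = glcm[:, :, 0, 0]                                  # normalized co-occurrence probabilities
    feats["entropy"] = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # entropy is not in graycoprops
    return feats
```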
Data training
Artificial neural network training uses the 14 inputs obtained from feature extraction and one output. The training algorithm is resilient backpropagation (trainrp). Training is conducted with the number of hidden neurons varied from 23 to 28, starting from 23 because the mean square error (MSE) target is below 0.01. The number of hidden neurons is determined by trial, in which 27 hidden neurons produce an MSE of 0.03 at 355 epochs (Table 2). In Table 3, training is conducted with a variety of learning rates, keeping the 27 hidden neurons from the previous training process, to find the smallest error value. The smallest error, 0.024, is obtained at a learning rate of 0.001. These hidden-neuron and learning-rate values are used as the reference configuration for the training process with resilient backpropagation.
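The sketch below reproduces this training setup in Python, using PyTorch's Rprop optimizer as a stand-in for Matlab's trainrp; the 14-27-1 architecture, the 0.001 rate, the 75 training samples, and the 0.01 MSE goal follow the paper, while the sigmoid activations, the 1000-epoch cap, and the random placeholder data are assumptions.

```python
# Minimal sketch of the 14-27-1 network trained with resilient backpropagation.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(14, 27), nn.Sigmoid(),   # 14 features -> 27 hidden neurons
                      nn.Linear(27, 1), nn.Sigmoid())    # -> 1 output
optimizer = torch.optim.Rprop(model.parameters(), lr=0.001)  # lr = initial step size
loss_fn = nn.MSELoss()

X = torch.rand(75, 14)   # 75 training samples (placeholder data)
y = torch.rand(75, 1)    # placeholder targets in [0, 1]

for epoch in range(1000):                 # epoch cap assumed
    optimizer.zero_grad()
    mse = loss_fn(model(X), y)
    mse.backward()
    optimizer.step()
    if mse.item() <= 0.01:                # MSE goal from the paper
        break
```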
Testing and Classification
Testing is used to measure the accuracy of the system in recognizing a given input and producing the correct output. The test set consists of 23 samples of normal and defective tiles, and the results are classified by defect type. Accuracy is obtained by verifying the test results: of the 23 test samples, 19 were classified correctly and 4 incorrectly. The accuracy of the system in recognizing defective and normal tiles is therefore 83%, with a 17% error rate:

Accuracy = (correct test data / total test data) × 100% = (19 / 23) × 100% ≈ 83%
Error classification of defective tiles
In addition, the testing results for each defect were mapped to determine the likelihood of detection error for each defect type, as shown in Table 5.
Figure 2. Result of Testing image
All of these misclassifications occurred for defective tiles with more than one defect type on the surface; classification was correct when only one defect type was present on the surface.
Computational time of testing
The computational time (self-time) of the testing process varies depending on the type and dimensions of the defect. As can be seen in Figure 5, the longest time, 1.06 seconds, occurs for a crack because of the long dimension of the defect region. For other types of defects, computing time is under 1 second, and the average computation time of the testing process is 0.25 seconds.
A new species of Bicurta Sheng et al. from China (Hymenoptera, Ichneumonidae, Collyriinae), a parasitoid of Stenocephus fraxini Wei (Hymenoptera, Cephidae)
A new species of the genus Bicurta Sheng, Broad & Sun, 2012, is described and illustrated, B. hejunhuai sp. nov., from North and Northeast China. The new species was reared from the stem-sawfly Stenocephus fraxini Wei (Hymenoptera, Cephidae), which is the first host record for the genus Bicurta.
The genus Bicurta is monotypic, with only the type species B. sinica described from Jiangxi Province of China (Sheng et al. 2012). Shang et al. (2016) reported a male specimen of B. sinica from Liaoning Province in Northeast China. The biology of the genus Bicurta was not known until this study.
The aim of this study is to describe a new species of Bicurta parasitizing S. fraxini Wei.
Materials and methods
Parasitoids were reared in the laboratory from larvae of Stenocephus fraxini collected in branches of Fraxinus spp. from North and Northeast China (Inner Mongolia, Liaoning and Heilongjiang), mainly from the downtown of Shenyang City. Photographs were taken using a KEYENCE VHX-5000 Digital Microscope imaging system and processed with Photoshop CS software. Morphological terms follow Broad et al. (2018). Abbreviations used in the text are as follows: POL = the shortest distance between posterior ocelli; OD = diameter of a posterior ocellus; OOL = the shortest distance between a posterior ocellus and a compound eye. Type material of the new species is deposited in South China Agricultural University, Guangzhou, Guangdong (SCAU).
Genus Bicurta Sheng, Broad & Sun, 2012
Type species. Bicurta sinica Sheng et al., 2012.

Diagnosis. This genus is distinguished from Collyria by the epicnemial carina being indistinct because of sculpture on the mesopleuron; the ovipositor straight and smooth (Fig. 12), lacking teeth on the ventral valve; and the fore and mid tarsal claws each having an acutely lobed tooth (Figs 10, 24) (Sheng et al. 2012), while in the two other collyriine genera, Aubertiella and Collyria, the fore and mid tarsal claws have a median tooth rather than a lobe.
Biology. Adults of the new species emerged from larvae of Stenocephus fraxini Wei (Hymenoptera, Cephidae) from March to May 2019 in Northern China. This is the first report of a host of Bicurta, which is consistent with the known biology of Collyria, as parasitoids of stem-sawflies (Hymenoptera, Cephidae).
Based on the field survey during 2018 to 2019, the parasitism rate of this species on S. fraxini was 59.3% on average in the downtown of Shenyang City (J.H. Yan, unpublished data).

Description. Holotype, female (Fig. 1). Body length 10.0 mm, fore wing length 6.2 mm, antenna length 3.8 mm, ovipositor length 1.6 mm.
Head. Face flat (Fig. 2), 1.2× as wide as high, centrally with sparse punctures, distance between punctures of central area 1.0 to 5.0× diameter of punctures, punctures close below antennal sockets and near inner orbits; face next to inner orbit with fine granular texture. Clypeus (Fig. 1) 2.2× as wide as high, finely and sparsely punctate, apical margin with an obtuse median tubercle, impunctate. Mandible weakly narrowed to apex, middle width of mandible 0.57× as wide as basal width of mandible, with lower tooth slightly longer than upper tooth. Labrum not exposed. Malar space short (Fig. 4), finely wrinkled and with fine leathery texture in between, 0.33× as long as basal mandibular width. Gena (Fig. 3) evenly convergent posteriorly, finely punctulate and pubescent, 0.63× as long as eye in dorsal view. Vertex (Fig. 3) with posterior portion finely punctulate, between lateral ocellus and eye with fine leathery texture. POL = 1.0, OD = 1.25, OOL = 1.0. Interocellar area flat with a short longitudinal groove. Frons finely punctate above antennal sockets, centrally with a weak longitudinal carina extending between antennal sockets to median ocellus, frons slightly rugose along carina sides. Antenna (Fig. 8) with 19 flagellomeres, ratio of length of basal five flagellomeres as follows: 1.42 : 1.25 : 1.17 : 1.08 : 1.0, first flagellomere 2.83× as long as its apical width, apical flagellomere 2.4× as long as its basal width, slightly shorter than fourth flagellomere (12 : 14). Occipital carina sharp and strong. Distance from hypostomal carina to mandible 1.25× longer than basal mandibular width.
Colour. Body mainly black. Head black, face with a pair of obscure yellow marks laterally just above tentorial pits; these yellow marks are very distinct in female paratypes (Fig. 20) and hardly discernible in holotype (Fig. 2). Mandible testaceous with lower margin and apical teeth black. Stipes and prementum black. Labial and maxillary palpi yellow. Antenna with scape and pedicel black, flagellum dorsally blackish brown and ventrally yellowish brown. Fore and mid legs buff with coxae black; hind leg black, apex of trochanter yellow, trochantellus blackish brown, proximal base of hind femur ventrally buff, proximal half of hind tibia ventrally dull yellow and dorsally dark brown to blackish brown, apical half of hind tibia black. Hind margins of tergites 1-7 narrowly yellow. Tegula black. Wings hyaline, with veins and pterostigma blackish brown.
Male (Figs 18, 19, 21, 22). Body length 8.2 mm, fore wing 5.4 mm. Similar to female. Differences from female as follows: antenna ventrally yellow to yellowish brown; face and clypeus (except lower margin blackish) yellow (Fig. 19), sometimes with a small blackish spot on face centrally; frons with several transverse wrinkles just above antennal sockets; first tergite centrally with two distinct carinae which extend to posterior 0.7 of tergite, posterior tips of carinae irregularly branched. Paramere apically truncate.
Etymology. The new species is named in honour of Prof. He Junhua from Zhejiang University in recognition of his years of dedicated and conscientious performance in the study of Chinese Hymenoptera, and also in celebration of his 90th birthday.
Comparison. The new species is similar to the type species, B. sinica, in its overall appearance and colour pattern, but it can be distinguished from B. sinica by the face having two obscure or distinct yellow marks (the face of B. sinica has the ventral inner orbits, clypeus and a stripe passing through the anterior tentorial pits yellow); the mandible weakly narrowed from middle toward the apex, with middle width of mandible 0.57× as wide as the basal width of mandible (B. sinica with the mandible strongly narrowed from middle toward the apex, with middle width of mandible 0.26× as wide as the basal width of mandible, measurements based on the figure of Sheng et al. (2012)); the central part of the face with sparse punctures (with dense punctures in B. sinica); the mesosternum polished, with sparse punctures (B. sinica has the mesosternum densely punctate); and the fore wing vein 1cu-a usually distinctly distad of M&RS (1cu-a opposite M&RS in B. sinica).
Biology. The species was reared from the larvae of Stenocephus fraxini Wei (Hymenoptera, Cephidae) in Northern China.
Discussion
Knowledge of the biology of the subfamily Collyriinae has been limited to life history studies of just two species of Collyria (Salt, 1931; Wahl et al., 2007). Both Collyria coxator (Villers) and C. catoptron Wahl have been shown to be koinobiont endoparasitoids of Cephus (Hymenoptera: Cephidae), ovipositing in the host egg and emerging from the cocooned larva. Sheng et al. (2012) presented detailed morphological evidence that placed the genus Bicurta in the subfamily Collyriinae, despite the very different morphology of the ovipositor. Confirmation that B. hejunhuai sp. nov. is also a parasitoid of larval Cephidae, but in a different habitat (tree twigs as opposed to grass stems), suggests that all collyriines may be koinobiont endoparasitoids of larval cephid sawflies, including the poorly known Aubertiella nigricator (Aubert), which has never been reared. The fine ovipositor of Bicurta species suggests that oviposition will be into host eggs or early instar larvae, although this has not been confirmed.
Short Bones, Renal Stones, and Diagnostic Moans: Hypercalcemia in a Girl Found to Have Coffin-Lowry Syndrome
Pathogenic variants in RPS6KA3 are associated with Coffin-Lowry syndrome (CLS), an X-linked semidominant disorder characterized by intellectual disability, stimulus-induced drop attacks, distinctive facial features, progressive kyphoscoliosis, and digit anomalies in hemizygous males. Heterozygous females may also have features of CLS; however, there can be considerable phenotypic variation, often attributed to ratios of X-inactivation in various tissue types. Although skeletal anomalies and short stature are hallmarks of CLS, hypercalcemia has not been reported. Here we describe a 30-month-old girl with gross motor delays, short stature, dysmorphic features, bilateral duplicated renal collecting systems, and no family history of hypercalcemia who required multiple admissions for idiopathic hypercalcemia necessitating bisphosphonate infusions at 12.5 and 15 months of age. A maternally inherited likely-pathogenic variant in RPS6KA3 was identified by trio exome sequencing, consistent with the diagnosis of CLS in the proband and her mother. Maternal history was notable only for decreased height compared to first-degree relatives, bilateral genu valgum, and a bicornuate uterus; she was later found to also have a partially duplicated left renal collecting system. Subsequent X-inactivation studies in blood aligned with the phenotypic variation between mother and daughter. Although hypercalcemia is not a reported feature in CLS, there is evidence of interrupted osteoblast differentiation, providing a potential mechanism for hypercalcemia in this genetic condition. The hypercalcemia in this case may represent a severe presentation of an unrecognized clinical feature in CLS that resolves with age. This case further highlights the intrafamilial phenotypic variation of CLS among females, suggesting X-inactivation as the underlying mechanism, and demonstrates the value of exome sequencing in patients for whom a genetic disorder is highly suspected but not identified despite thorough evaluation.
Introduction
Coffin-Lowry syndrome (CLS) is an X-linked semidominant genetic disorder caused by pathogenic loss-of-function variants in RPS6KA3, a gene located on the X chromosome at Xp22.2 that encodes the ribosomal S6 kinase, RSK2. 1 In hemizygous males, CLS is characterized by intellectual disability, short stature, progressive kyphoscoliosis, tapered fingers, stimulus-induced drop episodes, and distinctive facial features. Facial features include a prominent forehead with supraorbital ridges, widely spaced eyes with downslanted palpebral fissures, depressed nasal bridge, thick nasal alae, wide mouth, and an everted vermilion of lower lip. 2 The true prevalence of CLS is not known; however, the estimated incidence is between 1:50 000 and 1:100 000, with approximately 70% to 80% of affected individuals representing sporadic cases secondary to a de novo pathogenic variant in RPS6KA3. 1,3 The remaining 20% to 30% of cases are maternally inherited, as males with CLS do not typically reproduce. 2 There is considerable phenotypic variation among females with CLS who are heterozygous for a pathogenic variant in RPS6KA3, ranging from normal appearance and intelligence with short stature +/− digit abnormalities, to characteristic facial features with moderate developmental and intellectual delay/disability. 1,2,4,5 The wide range of clinical severity reported among females with CLS is thought to be secondary to ratios of X-inactivation in various tissue types. 2,4 Thus, in inherited cases of CLS, a mildly affected woman may have more severely affected daughters in addition to having a 50% chance of conceiving a son with CLS. Of note, a study by Simensen et al 5 demonstrated that females heterozygous for a pathogenic variant in RPS6KA3 are more likely to have skewed X-inactivation patterns in blood (favorable skewing) and decreased IQ compared with female family members without the pathogenic variant; there was no significant correlation between IQ and X-inactivation among heterozygous females.
Despite the well-observed impact of CLS on a variety of skeletal tissues in affected males and females, including the axial skeleton (progressive kyphoscoliosis and ligamenta flava calcification), long bones (short stature with disproportionately short lower limbs), 2,6-8 and distal phalanges (tapered fingers), the underlying pathophysiology of this phenotype in CLS remains unclear. Here, we describe a 30-month-old girl with idiopathic hypercalcemia necessitating multiple hospital admissions and bisphosphonate (pamidronate and zoledronate) infusions at 12.5 and 15 months of age who was found to have CLS via trio exome sequencing. 9 Although her calcium levels normalized around 22 months of age and have remained stable, she continues to have secondary nephrocalcinosis with normal renal function. Hypercalcemia is not a reported feature in individuals with CLS; however, this individual's presentation and calcium dysregulation may provide insight into the underlying mechanism of skeletal anomalies associated with CLS. 9
Case Presentation
A 12-month-old girl with a history of mild motor delay was admitted to the pediatric intensive care unit for electrolyte derangements in the setting of 2 weeks of fatigue and decreased oral intake, and a 1-day history of persistent nausea and vomiting. Comprehensive metabolic panel (Table 1) was most notable for profound hypercalcemia (14.7 mg/dL), hyponatremia (129 mmol/L), hyperkalemia (6.2 mmol/L), hypochloremia (89 mmol/L), decreased bicarbonate (16 mmol/L), anion gap of 24, elevated BUN (48 mg/dL), and elevated creatinine (0.96 mg/dL). Additionally, hyperuricemia (10.3 mg/dL) was noted. Renal ultrasound on admission was significant for bilateral grade 3-4 medullary calcinosis. Nephrology evaluation, pertinent for an appropriately and acutely elevated urine Ca/Cr ratio which later resolved, provided evidence against a renal etiology with no evidence of primary renal disease. Endocrinology evaluation at the time of hypercalcemia was notable for appropriately suppressed parathyroid hormone (3.3 pg/mL; Table 1) with normal albumin, phosphorus, alkaline phosphatase, magnesium, thyroid stimulating hormone, free-T4, Vitamin A, 25-hydroxy and 1,25-dihydroxy vitamin D levels. Inappropriately elevated parathyroid hormone-related peptide (PTHrP, 9.7 pmol/L; Table 1) was noted without identifiable source despite laboratory studies and extensive imaging in consultation with Oncology; lactate dehydrogenase, alpha-fetoprotein, beta-human chorionic gonadotropin, and C-reactive protein were all within normal limits. Abdominal magnetic resonance imaging (MRI) revealed bilateral duplicated renal collecting systems, redemonstration of bilateral nephrocalcinosis, and no suspicious abdominal or pelvic mass to suggest malignancy. Additionally, brain and neck MRI were normal. Tests for lead toxicity and human immunodeficiency virus (HIV) were both negative. The proband was treated with intravenous fluids with resolution of her hypercalcemia (calcium 10.3 mg/dL) and other electrolyte abnormalities, and discharged on day 11 of admission. Ten days later, the proband was admitted a second time for profound hypercalcemia (14.3 mg/dL), at which time she was treated unsuccessfully with calcitonin (4 doses of 4 units/kg q12 hours), followed by intravenous bisphosphonate therapy (1 dose 0.5 mg/kg of pamidronate) with good response (Figure 2). The proband was discharged on day 7 of admission with a serum calcium level of 8.5 mg/dL.
Medical Genetics was consulted on both admissions due to concern for an underlying genetic etiology of her hypercalcemia. Perinatal history was notable for high-risk noninvasive prenatal screening for Turner syndrome which was further evaluated by karyotype via amniocentesis and revealed a normal female karyotype. The proband was born at 39 weeks gestation to a 32-year-old G1P0 woman via cesarean section for breech presentation. Pregnancy was complicated by fetal growth restriction thought to be secondary to the mother's bicornuate uterus. Birth weight was 2390 g (fourth percentile) with a birth length of 46 cm (seventh percentile) and head circumference of 32.5 cm (seventh percentile). The proband's mother's medical history was notable for decreased height (61 inches, seventeenth percentile for US adult women) compared to first-degree relatives (mid-parental height 67 inches), bilateral genu valgum, and a bicornuate uterus; there was no maternal history of hypercalcemia.
At 12.5 months of age, physical exam was notable for length of 68 cm (<third percentile, Z-score = −2.05) with relative macrocephaly (44.5 cm, 33rd percentile, Z-score = −0.45) and frontal bossing, depressed nasal bridge with anteverted nares and bulbous nasal tip, and everted lower vermillion border (Figure 1). Initial Medical Genetics evaluation included a skeletal survey and microarray, both of which were reported as normal. A custom calcium homeostasis gene panel with 37 genes was ordered and nondiagnostic. After multiple failed attempts in obtaining insurance authorization for additional genetic testing, trio exome sequencing was funded by the nonprofit organization, Little Zebra Fund. 10 Exome sequencing revealed a maternally inherited likely pathogenic variant (c.325+1G>T) in RPS6KA3, located within intron 4 at a canonical splice site and predicted to result in a null allele. This finding is consistent with the diagnosis of CLS in our proband, as well as her mother; no other variants were reported. Subsequent X-inactivation studies 11 performed on blood demonstrated complete favorable skewing (100:0) in the mother and random X-inactivation in the proband (57:43), consistent with the phenotypic variation between mother and proband. Familial variant testing in the proband's maternal grandmother and maternal half-aunt were negative, suggesting the pathogenic variant to be de novo in the proband's mother.
After discharge from her second admission, the proband's calcium levels were monitored weekly by Endocrinology and continued to be elevated but stable (range 10.5-12.1 mg/dL). At 15 months of age, the proband was admitted for a third time for severe hypercalcemia (12.5 mg/dL; Table 1) and treated again with intravenous bisphosphonate therapy (1 dose of 0.0125 mg/kg of zoledronate, in hope of a longer duration of effect). Her calcium level on 5 days post-zoledronate infusion was 10.9 mg/dL ( Figure 2). Monitoring of the proband's calcium level at least monthly from age 15 to 22 months demonstrated consistent levels below 11.5 mg/dL (range 10.9-11.4 mg/dL). Monitoring of her calcium level at least every 3 months from age 22 to 30 months demonstrated consistent levels below 11 mg/dL (range 10.4-10.8 mg/dL).
At Medical Genetics outpatient follow-up visits at 21 and 30 months of age, many of the facial features previously noted had become more pronounced and tapered fingers were also noted (Figure 1). At 30 months of age, growth parameters included length of 85 cm (fifth percentile), weight of 11.2 kg (eighth percentile), and head circumference of 48 cm (45th percentile). Parental reports and evaluations by Developmental and Behavior Pediatrics suggest evidence of global delays that are improving with therapies including occupational therapy for gross and fine motor (unsteady gait with frequent falls and now able to manipulate small items at 30 months), speech-language therapy for delayed speech (>100 words and able to form 2-word sentences at 30 months), and poor feeding by mouth. The proband continues to gain new skills and is socially engaged. Follow-up evaluation with Nephrology continues to provide evidence against a renal etiology for her prior hypercalcemia and has demonstrated stable bilateral nephrocalcinosis with normal kidney function; she is being monitored for hypertension and proteinuria. She continues to follow with Endocrinology for monitoring of calcium levels. Overall, the proband's clinical features are consistent with her diagnosis of CLS with the exception of her history of profound hypercalcemia. The cause and/or mechanism of her hypercalcemia are unclear despite thorough evaluation by Endocrinology, Nephrology, Oncology, and Medical Genetics.
Discussion
Although various degrees of cognitive disability, short stature, skeletal anomalies, and facial features have been reported in females with CLS, hypercalcemia has not been reported in affected females or males. The underlying etiology of the proband's hypercalcemia remains unclear despite thorough evaluation by multiple subspecialists. The persistent mild elevation of PTHrP may be contributing to her unexplained hypercalcemia; however, her PTHrP levels have remained elevated while calcium levels have decreased over time. The cause of her elevated PTHrP remains unclear despite thorough evaluation by Oncology and close follow-up with Endocrinology; no additional imaging has been performed in the interim. Given our proband's diagnosis of CLS and considerable clinical, laboratory, radiological, and genetic evidence against another cause and/or second genetic condition, we hypothesize at this time that her transient, yet profound, hypercalcemia of early childhood is related to her diagnosis of CLS. In 2004, Yang et al 12 proposed that lack of phosphorylation of the transcription factor, ATF4, by inactive RSK2 (due to loss-of-function variants in RPS6KA3) may interrupt the normal regulatory role of ATF4 in osteoblast differentiation, accounting for some of the skeletal features seen in CLS. This hypothesis is further supported by the Rsk2-null mouse model created and characterized by Marques Pereira et al., with phenotypic features including mild reduction of bone mass, mild teeth anomalies, and development of progressive osteopenia. These findings were reportedly due to impaired osteoblast function, with the lack of phosphorylation of the transcription factor ATF4 by RSK2 identified as a cause of the skeletal abnormalities. 1 The features seen in the Rsk2-null mouse model appear to be consistent with the delayed bone development seen on radiological studies of individuals with CLS, which also include cranial hyperostosis, ligamental flava calcification, and tufting of the distal phalanges. 1,6,13 Whether or not these radiologic features seen in individuals with CLS are associated with transient or continuous calcium dysregulation is unclear at this time.
Both the proband and her mother were diagnosed with CLS through trio exome sequencing, and except for the proband's hypercalcemia, the clinical features of both individuals are consistent with the diagnosis. The likely pathogenic variant in RPS6KA3 in the proband was maternally inherited, highlighting the intrafamilial phenotypic variation of CLS among heterozygous females. The results of X-inactivation studies were consistent with the discordant phenotype between mother and daughter, supporting X-inactivation as the underlying determinant of phenotype severity in females with heterozygous pathogenic variants in RPS6KA3. Interestingly, our proband's mother was also recently found to have a partially duplicated left renal collecting system upon evaluation for a kidney stone. Renal anomalies are not a commonly reported feature of CLS, with only 1 individual reported to have unilateral renal agenesis. 6,14 It is unclear at this time if the finding of a duplicated renal collecting system in both the proband and her mother is related to CLS, hereditary, or coincidental, but it is unlikely to have contributed to the finding of hypercalcemia in the proband given evidence against a renal etiology.
After extensive multidisciplinary clinical evaluation and continued absence of a unifying diagnosis to explain our proband's constellation of features, trio exome sequencing was warranted to further evaluate for an etiology of her hypercalcemia for which medical intervention may be beneficial. Although it remains unclear if the finding of CLS explains her hypercalcemia, the result of trio exome sequencing in this case provided 2 members of this family a clinically actionable diagnosis with associated medical management, screening, and surveillance guidelines that would not have been provided to the family without this genetic testing result. In addition, this testing result has provided multiple family members with valuable reproductive information. A second medical or genetic condition is possible, although exome sequencing did not report any other variants. Exome reanalysis will be of value in the future as the field of Medical Genetics continues to expand and our understanding of human disease improves.
To conclude, our proband was found to have hypercalcemia after presenting acutely with symptoms of vomiting, dehydration, and failure to thrive. To our knowledge, there is no maternal history of hypercalcemia in infancy or childhood, and recent calcium levels in the proband's mother were within normal limits. It is possible that the proband represents a severe presentation of an unrecognized phenotype that resolves with age, and likely prior to most patients' diagnosis. As exome sequencing and perinatal genetic testing become more routinely offered to individuals with overlapping features or a history of CLS, respectively, more individuals with CLS will be diagnosed in infancy. Earlier diagnosis and phenotyping of individuals with CLS will not only aid in the medical management of infants and children with CLS but will also help to clarify if hypercalcemia, as seen in this case, represents an expansion of the phenotype or a truly unrelated feature.
Age, sexuality and hegemonic masculinity: Exploring older gay men's masculinity practices at work
This article examines how older gay men practice masculinity in heteronormative organizational settings. Our analysis of in-depth interview data yields two key masculinity practices: maintaining heteronormativity and embodying change.
Older gay men's masculinity practices that conform to the ideals of hegemonic masculinity have the effect of maintaining heteronormativity. Embodying change refers to older gay men's masculinity practices that leverage accumulated life experiences to negotiate heteronormativity for change, although such agency is constrained by individuals' material and symbolic commitments to heteronormativity.
By delineating these two clusters of practices and exploring the dynamic relationality between individual action and organizational order from a practice-based perspective, we extend the conceptual scope of hegemonic masculinity. Furthermore, by investigating how older gay men navigate ageing and sexuality in organizations, we show the constraining and enabling effects of ageing as a social and embodied process on gay men's masculinity practices.
KEYWORDS
age, body, hegemonic masculinity, organization masculinities, practice, sexuality
1 | INTRODUCTION
This article addresses the paucity of scholarly knowledge on how age and sexuality shape how individuals practice masculinity in organizations. Currently, age is largely absent from scholarly accounts of sexuality at work, even when organizational research on lesbian, gay, bisexual and trans (LGBT) workers examines how sexuality interconnects with other identity categories such as class, gender and race (e.g., Ragins, Cornwell, & Miller, 2003; Rumens, 2010).
The shortage of research that explores the relationship between age and sexuality at work is concerning because age is understood to be a significant site of social control (Gilleard & Higgs, 2014). For example, age-related norms can require workers to conform to heteronormative expectations of monogamous partnership and child rearing (Riach, Rumens, & Tyler, 2014), an outcome of which may be the marginalization of non-normative ways of living age in and outside of work. Neglecting the organizational salience of age and sexuality can sustain organizational heteronormativity by ignoring older LGBT workers' embodied experiences of sexuality and ageing in the workplace (Rumens, 2018).
Although there is a mature literature that analyses how organizations are gendered, the question of how masculinities are related to age and sexuality remains largely unaddressed (e.g., Barrett, 1996; Collinson & Hearn, 1994; Kerfoot & Knights, 1993; Knights & Tullberg, 2014). Some research focuses on the importance of ageing in the context of work and organization. For example, Riach and Cutcher (2014) find that ageing is an accumulation process during which workers manage the experience of growing older to elicit productive career outcomes. Similarly, Foweraker and Cutcher (2015) show that ageing poses a challenge to masculinity, which older male workers counter by drawing on successful ageing narratives and by distancing themselves from hegemonic masculinity. These studies indicate how ageing is a dynamic social process. While the organization masculinities literature is age-blind (Riach & Cutcher, 2014), the neglect of sexuality in organization masculinities scholarship is even more profound (Rumens, 2014). Research claims older men and gay men tend to practice masculinities from a subordinated position (Slevin & Linneman, 2010; Yeung, Stombler, & Wharton, 2006), but this is by no means the only way that they may practice masculinities. Thus, exploring older gay men's masculinity practices in the workplace can provide welcome insights into how age, gender and sexuality are interconnected and contextually contingent.
In this article, we mobilize the concept of hegemonic masculinity (Connell, 1995;Connell & Messerschmidt, 2005) to explore the question: how do older gay men practice masculinity in organizations? Our research focuses on older gay men specifically, rather than older LGBT workers generally, because gay men's sexuality has long connoted and continues to be related to sexual deviance (e.g., promiscuity, paedophilia, etc.), and gay men pose a disruptive threat to the heteronormative social order (Eribon, 2004). Older gay men still face stereotypes that frame them as sexual predators and perverts (Jones & Pugh, 2005), which may mean they are especially vigilant about how they manage the relationship between sexuality, age and masculinity (Rumens, 2018). As well, conceptually, the reproduction of hegemonic masculinity hinges on the subordination of some men, in particular gay men, by other men through the policing of heterosexuality (Connell & Messerschmidt, 2005).
Understanding how older gay men negotiate hegemonic masculinity in heteronormative organizational settings holds significant theoretical and empirical value. In particular, our study highlights the conceptual utility of hegemonic masculinity in examining the dynamic relationality between the agency of older gay men and organizational heteronormativity, where older gay men's practices of masculinity are the key drivers of continuity and change regarding inequalities of age, gender and sexuality. Empirically, our analysis yields two clusters of masculinity practices: maintaining heteronormativity and embodying change. In the data, maintaining heteronormativity emerges as masculinity practices that align with hegemonic masculinity. When maintaining heteronormativity, older gay men can protect their status and capacity to exercise power as managers and professionals. Additionally, our data also points to masculinity practices marked by the embodiment of change that signal older gay men's agency to negotiate heteronormativity to argue for change. Nonetheless, embodying change represents a constrained agency because of older gay men's existing material and symbolic commitments to organizational heteronormativity.
In our study, ageing operates as a social and embodied process, which exerts enabling and constraining effects on gay men's masculinity practices. On the one hand, ageing is linked to how older gay men manage sexuality in ways that render them unthreatening to their heterosexual male colleagues, affording them opportunities to practice masculinities that challenge their organizations. Indeed, older gay men draw on practical competencies accumulated from their history of change-seeking activity. On the other hand, ageing can make older gay men vulnerable to age-specific heteronormative stereotypes, but our data reveals how they can counter these through specific masculinity practices. As well, we show that ageing can generate anxieties about conforming to the ideals of hegemonic masculinity, which can lead older gay men to practice an embodied managerial masculinity that maintains gender and sexuality hierarchies at work.
The rest of the article is structured as follows. First, we unpack the concept of hegemonic masculinity. Then, we review the research that has examined the dynamics between sexuality and age before outlining the study's methodology. The findings section presents older gay men's practices of masculinity, organized around the themes of 'maintaining heteronormativity' and 'embodying change'. Finally, we outline the contributions this article makes to extant research before concluding.
2 | THEORETICAL FRAMEWORK: HEGEMONIC MASCULINITY
The concept of hegemonic masculinity hinges on the idea that gender is a social construction (Connell, 1987, 1995). Rejecting an essentialist view of gender is important in understanding masculinity as a practice, not a set of sex roles or traits, but embodied performativity. Connell (1995) defines practice as body-reflexive social action: a multifaceted phenomenon incorporating both material and symbolic elements, where the body is at once the object and subject of practice, and where practice has a constitutive effect on the social order. The centrality of practice to Connell's (1987, 1995) hegemonic masculinity highlights agency as a significant component in theorizing masculinity. Connell (1987) contends that practising masculinity, as exemplified by socially idealized notions of being a 'man', generates a hierarchy between groups of men, and between men and women across various social settings. Thus, we know hegemonic masculinity when we encounter it because it constitutes an ideal-type masculinity, against which all other masculinities are evaluated. Furthermore, hegemonic masculinity takes on a social significance that goes beyond any specific practice of masculinity, because it serves as the cultural logic that legitimates and sustains the subordination of particular groups of people. A form of masculinity (or femininity) arises from practising what is agreed socially to be masculine (or feminine) in a particular context, which then facilitates men and women to occupy a certain recognizable position in a hierarchical gender system (Connell, 1987). Both men and women are able to practice masculinities and femininities, but women and some men cannot fulfil the ideals of hegemonic masculinity in their practices of masculinity. Additionally, forms of femininity and their practitioners are always located in a relational and subordinate position. The long-term survival of the hierarchical gender system stems from the consent and compliance of all agents operating under hegemony. Although practising masculinity can be an unreflective process, individuals can enact their social identities strategically, as they need to read the social world in which they operate to survive and thrive (Connell, 1995). While some men may internalize hegemonic masculinity more unwittingly and unwillingly, other men may interiorize it strategically and valorize it as the most legitimate form of masculinity and align their actions in order to fit in socially (Bird, 1996).
In our reading of hegemonic masculinity, agency operates in dynamic relationality with structure, which parallels Reed's (2003) relationist account of the agency-structure interplay. According to Reed (2003), individuals have significant capacity for creative action, which can effectively reshape social orders in which they are situated. However, creative agency is constrained by the relationships and effects of the social orders, which individuals have to negotiate. Both agency and structure have analytically distinct influences on the constitution of social reality (Reed, 1985).
The very ingenuity of individual action is what facilitates the plurality of the concept of masculinity. Masculinity can come in various instantiations as specific individuals enact patterns of practices in particular social settings. The strong and enduring norms, rules and scripts of conduct in a social setting can constrain individuals' change-oriented actions. Still, agentic breakthroughs are possible when individuals with knowledge of the social order in which they are acculturated operate reflexively and creatively to contest that social order from within.
In making sense of agency in relation to hegemonic masculinity, we grappled with Connell's counterhegemonic view of gay men's masculinities. For Connell (1992), gay men's masculinity is relegated to a subordinate position vis-à-vis hegemonic masculinity. Patriarchal culture views gay men as lacking masculinity and even when they do not appear feminine they are denigrated as being feminine (Connell, 1995). By embodying subordinated masculinities, gay men can experience disadvantage when compared with heterosexual men (Carrigan, Connell, & Lee, 1985; Connell, 1992). Tangibly, this is exemplified in incidents of homophobic bullying in schools (Plummer, 1995), employment discrimination (Drydakis, 2015; Ozturk, 2011; Ozturk & Rumens, 2015) and hate crimes against gay men (Herek, 2009). At the same time, feminist literature implicates gay men in upholding unequal gender norms and gaining a patriarchal dividend through forms of sexism (Ward, 2000). Messner (1997) views gay men as willing collaborators with patriarchy because gay male culture largely trades on, and draws from, the ideals of hegemonic masculinity. Similarly, Yeung et al. (2006) note gay men's ambivalent relationship to hegemonic masculinity. In their study of gay male fraternities, members could celebrate being gay, but also closed ranks to exclude women from the fraternal order by referencing gender differences.
We agree with Connell that in one sense gay men are counterhegemonic, because their practice of sexuality can destabilize the institution of heterosexuality or, in the context of our study, what we refer to as heteronormativity, a normative regime that privileges heterosexuality. Nevertheless, we submit that gay men can also subscribe to hegemonic masculinity (Whitehead, 1999), and ageing introduces further tensions into this picture. Research shows that older men are routinely viewed as invisible and asexual, reshaping expectations surrounding their masculinity practices (Thompson, 2006). Older bodies often connote diminishing vigour, which in turn implies a loss of ground in negotiating masculinity (Meadows & Davidson, 2006). As otherwise privileged men get older, they come to occupy a subordinate position vis-à-vis hegemonic masculinity (Connell, 1995). Being an older gay man is sometimes seen as a double de-masculinization, although the social effects of gay men's ageing are complex and context-dependent (Slevin & Linneman, 2010). Yet, older gay men may have the strategic capacity to negotiate the negative implications of ageing, having weathered exclusionary pressures surrounding masculinity all their lives (Wahler & Gabbay, 1997).
Older gay men have agency, and we argue that understanding their masculinity practices promises to generate a fuller account of the continuity and change of organizational heteronormativity.
Despite the achievements of the organization masculinities literature to 'name men as men' (Collinson & Hearn, 1994), extant scholarship has tended to overlook LGBT sexualities (Rumens, 2014) and, with notable exceptions (e.g., Foweraker & Cutcher, 2015; Riach & Cutcher, 2014), ageing masculinities are also neglected. Although it is crucial to interrogate the advantages dominant masculinities can carry in organizations, hegemonic masculinity scholarship risks privileging white, heterosexual, middle-class and middle-aged men, owing to its predominant focus on this group (Ashcraft & Flores, 2003). In this light, studying older gay men's masculinity practices is a corrective step against this tendency. Men in management roles tend to enjoy considerable agency in organizations (Knights & Tullberg, 2014), although this cannot be presumed when they are older and identify or are believed to be gay. Studying how older gay men may seek to negotiate their agency through particular practices of masculinity can both expand the range of organization masculinities we currently understand, and sustain the relevance of hegemonic masculinity in research on organizations, sexuality and age.
SEXUALITY, AGE AND MASCULINITY AT WORK
Despite the growth of 'gay-friendly' organizations, LGBT employees continue to experience the harmful effects of working in heteronormative organizations (Giuffre, Dellinger, & Williams, 2008; Priola, Lasio, De Simone, & Serri, 2014). From one perspective, organizations are increasingly developing LGBT-friendly initiatives such as supportive workplace policies to address the needs of LGBT workers (Cook & Glass, 2016). From another perspective, research reveals that heteronormativity persists and has a negative impact on LGBT workers (Benozzo, Pizzorno, Bell, & Koro-Ljungberg, 2015; Rennstam & Sullivan, 2018). In recruitment, lesbian and gay job applicants can receive fewer interview invitations and lower pay offers, and exclusion remains particularly strong in male-dominated occupations that require competencies associated with hegemonic masculinity (Drydakis, 2015). On the job, bullying and harassment of gay men endures (Hoel, Lewis, & Einarsdóttir, 2014), while gay men who identify as or are perceived to be overly feminine can experience homophobia (Rumens & Broomfield, 2012). Understanding gay men as failures in masculinity can jeopardize whether they are seen as 'professional' (Rumens & Kerfoot, 2009) and limit the types of workplace relationships they are able to develop (Rumens, 2010).
Research shows that in those work contexts where hegemonic masculinity is valorized, openly gay men are able to practice masculinities that entrench hegemonic masculinity (Ozturk & Rumens, 2014; Rumens & Kerfoot, 2009; Ward & Winstanley, 2006). Additionally, anxiety and fear of deviating from organizationally desirable norms of masculinity continues to trouble gay men at work (Stenger & Roulet, 2018). Some gay men feel obliged to accommodate masculinity norms by understating their sexuality through a range of identity management strategies such as passing as straight, avoiding discussions about their sexuality and non-disclosure (Woods & Lucas, 1993). Homophobia, which is a key feature of hegemonic masculinity, shapes how organization masculinities are enacted (Barrett, 1996), making gay men's relationship with masculinity problematic.
As noted above, the interplay between gay men's sexualities, age and masculinity has largely been overlooked by organization studies scholars (Harley & Teaster, 2016), although recent research has started to explore LGBT sexualities and ageing, showing how age can enable and constrain the performative constitution of LGBT subject positions that facilitate and foreclose opportunities for LGBT workers to be recognized as viable organizational subjects. Similarly, Rumens (2018) examines how age shapes older gay men's capacity to negotiate inclusive masculinities characterized by more caring and supportive qualities (Anderson, 2009). Yet, in practising such masculinities, some older gay men struggled to avoid being stereotyped as too feminine (e.g., as 'old gay queens') and thus being seen as 'unfit' and 'unprofessional' for work. Missing from this strand of research is how older gay men can practice masculinities in ways that enable them to survive and even challenge organizational heteronormativity. Therefore, in this research, we address the question of how older gay men practice masculinities in organizations.
RESEARCH METHODOLOGY
The data underpinning this research came from 12 in-depth interviews. The choice of a limited sample size was intentional, in order to cultivate a deep and intensive engagement with the participants (Crouch & McKenzie, 2006). Small-N interview research is a well-accepted feature of LGBT organizational scholarship, not least because these minority groups are difficult to access (e.g., Ozturk & Rumens, 2014). By limiting the sample size, we were able to spend more time probing interviewees to generate in-depth data and reach data saturation. We set the eligibility criteria as management-level older gay men involved in professional work, such as accounting, banking and finance, law and consulting. These criteria not only yielded the most useful data for our purposes, but also facilitated a more vivid, and fuller, understanding of a specific population of interest. We set the minimum participant age at 50, a common cut-off point used in qualitative studies involving older workers (e.g., Moore, 2009; Tomlinson & Colgan, 2014). The eventual age range of the participants was 51-65. We used author networks purposively, coupled with a snowball sampling strategy, a common feature of explorative research involving stigmatized, hard-to-reach populations (Creed, DeJordy, & Lok, 2010). We recruited older gay men who shared commonalities in terms of organizational seniority and professional identity to be able to make meaningful observations of this group's particular practices of masculinity. Our sampling strategy had the unintended consequence of generating an all-white sample of older gay men. We acknowledge that the masculinity practices we analyse may have different saliences for older gay men who hail from different racial and ethnic backgrounds. Older gay men vary along numerous social identities (King, 2016; Spedale, 2019), but we prioritized the requirement for study participants to have similar characteristics, as exemplified by recent studies of hegemonic masculinity that set a clear empirical scope to ensure the specificity of social agents and their context (e.g., Knights & Tullberg, 2014; Sang, Dainty, & Ison, 2014).
Through in-depth interviewing, we constructed a non-threatening conversational space in which to converse with participants (Johnson, 2001). This mode of data collection is useful in explorative research, where the topic area and the research population are understudied (Kvale & Brinkmann, 2009). We developed an interview schedule based on our review of the literature, but we used this sensitively in order to ensure that interviewees had a participative role in steering the dialogue. The interviews included a broad range of questions that probed participants' experiences as older gay men in professional work contexts and how they variously practised masculinities at work.
The interviews also incorporated elements of life history, in common with other studies of hegemonic masculinity (Connell & Messerschmidt, 2005). Conducted in locations chosen by the participants, the interviews ranged from one to two hours in length and were digitally recorded and fully transcribed. We assigned participants pseudonyms to safeguard their anonymity.
At the outset, as a middle-aged, middle-class, gay male academic, the first author anticipated being relatively comfortable interviewing other gay men. Yet, once the interviews began, he experienced feelings of awkwardness and inadequacy when interacting with some of the interviewees: powerfully placed corporate men who spoke with authority, which sometimes made the interviewer feel out of place. In post-interview reflections, it became clear that the first author initially felt daunted and under pressure to manage his own behaviour to leave a good impression while interacting with the interviewees. This included adopting the corporate vocabulary of the interviewees (e.g., best practice, buy-in, scalable, win-win, etc.) and mirroring the corporate appearance of hegemonic masculinity that some interviewees maintained even when being interviewed (e.g., wearing a blazer, shirt and tie). Such adjustments speak to the potency of the norms of the social world inhabited by the interviewees. Even as a transient interlocutor, the first author found it difficult not to engage with the norms of organizational masculinities.
As critical scholars, we interrogate what study participants say instead of simply taking their statements at face value. While analysing the data we found ourselves at times becoming overcritical of the men who seemed to uphold hegemonic masculinity or who we thought only challenged it in the margins. Our sustained discussions alerted us to our own pre-conceptions, which led us to judge the participants harshly at first. On deeper reflection, we realized that there was pain and suffering interwoven in some participants' accounts, observable in how they negotiated hegemonic masculinity. This sensitized us to the need to perform critique from a position of care and compassion.
We undertook a thematic analysis to interpret the interview texts (Boyatzis, 1998). Thematic analysis requires superimposing a coding structure on the participants' words. From the outset, we shared Harding, Ford, and Lee's (2017) concerns regarding the difficulty of transforming complex, textured participant accounts into systematically broken up categories, and this sensitivity guided many of the subsequent choices we made. We took the decision to opt for manual coding, which helped us guard against the detaching and depersonalizing effects of computerized sorting, labelling and enumeration (St. Pierre & Jackson, 2014). Understanding masculinity practices hinges upon the identification of ambivalences, contradictions and shades of meaning within the interviews, which software packages could fail to capture because of their quantizing effect when analysing text (Basit, 2003).
We started the analysis process by sensitizing ourselves to the nuances in the interview content through repeated readings, during which we manually assigned initial codes. We mutually interrogated the codes we developed by trying to assess the similarities and differences in our approaches to the data, and by carefully considering the impact of our own emotions, preconceptions and identities on the coding process (Alvesson & Sköldberg, 2018).
Our experiences of coding the interviews yielded frank and robust discussions about the suitability of the codes we developed. These discussions helped us merge overlapping codes, produce additional codes where necessary, and revise or delete some initial codes. Where we had coding disagreements, we resolved them by moving iteratively between theory and data to refine our analysis (Alvesson & Sköldberg, 2018).
OLDER GAY MEN'S PRACTICES OF MASCULINITY
In this section, we present the study findings on older gay men's practices of masculinity in organizations around two major themes: maintaining heteronormativity and embodying change. The social and embodied process of ageing variously appears as a constraint or opportunity across both clusters of practice.
Maintaining heteronormativity
In the interviews, some participants' practices of masculinity maintained heteronormativity, in particular its restrictive norms about age, gender and sexuality. In general, our participants routinely recognized that they were at risk of failing to meet gendered expectations about being managers. Anxieties about the negative social implications of ageing as gay men appeared to reinforce their vulnerabilities as managers. Accordingly, some participants, such as William, a managing director, emphasized practising hegemonic masculinity in order to fit into an aggressive and competitive work environment:

We are a leading investment bank, so I think there's just no question, we just need to occupy that special zone, own the leading edge in everything we do. So you can't escape the power, the aggression, the competition, the intensity. I live it, I mean, every day there's something to test you … I've got to fit in with what I do … lead a testosterone-driven place. (William, banking and finance, 62)

Workplace expectations surrounding the masculine body and its practical potential (e.g., the explicit referencing of testosterone, as above) are linked to the demands of power play and competition in William's workplace. Older gay men often judged their success as managers by virtue of their embodied competence to lead effectively in a highly masculinized work environment. They managed work and people in line with the ideals of hegemonic masculinity, which helped them exercise managerial power.
I'm close to retirement, so people may very well think I don't care as much anymore … Obviously, I've got to project a certain dog-eat-dog masculinity in my behaviour, that's a given. I still mean business. (William, banking and finance, 62)
Some of the participants consciously sought to burnish their management credentials by outperforming others to achieve a sense of managerial supremacy. These participants managed negative co-worker or client perceptions surrounding their age by adopting a fiercely competitive management style that confirmed their embodied capacity for confronting challenges. In this way, participants connected managerial legitimacy to contributing to the bottom line proactively and securing new business victories. Some of the participants pursued status in their organizations by seeking recognition from more privileged heterosexual men. They sought to achieve influence within organizational power hierarchies by strategizing to be more like the 'high-flyers'. A partner in an accountancy practice explained:

It's not about being alpha male or type A personality, it's something a bit more nuanced than that … it's how I compete, it's knowing where to push and where to let go, just understanding the puzzle, how the pieces fit really, understanding who and what matters in the office … well, also about winning big …, at least some of the time to remain in the magic circle. (Michael, accounting, 61)

Participants appeared to hone in on acts of winning as a means of achieving pre-eminence within a select group of privileged men (e.g., striking deals, completing high-profile projects, securing key client accounts). Older gay men could exercise power and attain prestige by conforming to the ideals of hegemonic masculinity, although ageing continually created anxieties and insecurities about the long-term sustainability of their position in the organizational hierarchy.
Ageing is a black mark against you, and it's a much bigger challenge if you're gay. Having to prove you can do this job is like quicksand, it's quite scary to imagine what would happen if I'm too old to pull in new business … too old to remain at the top of my game. (Michael, accounting, 61)

Hegemonic masculinity triggered anxieties about measuring up, which sometimes led participants to intensify their focus on business development and commercial relevance to demonstrate desirable managerial masculinity. While some suggested that their struggle against ageing was a 'losing game', they continued to signal an implacable commitment to sustaining their workplace privilege. In particular, they perpetuated an organizational culture that emphasized competition and winning, which excluded women and some gay men.
Performing embodied managerial masculinity hinged on endorsing gender and sexuality hierarchies. Some of the participants tended to marginalize gay men's practices of femininity (i.e., calibration of gait, voice and behaviours in ways that are normatively associated with women). They considered feminine gay men as organizational outsiders and characterized their gay masculinities as a hindrance to managerial effectiveness. For most participants, it was okay to be older and gay so long as one was not 'acting gay', which the participants dismissed as 'unprofessional' (Rumens & Kerfoot, 2009). These participants justified their masculinity practices based on differences in occupational norms and generational preferences. A supply chain consultant employed by a business and technology consulting firm rationalized:
I've seen gay men wearing makeup in Soho. A full beard and makeup! I'm not sure what they do for work, probably arts and culture, but that's really, for my generation of gay men, that's bonkers. For a younger generation, it might be fine. (Anthony, consulting, 64)

In the above, Anthony reads the embodiment of masculinity and femininity on a gay male body as disrupting gender norms. Notably, Anthony stereotypes an older and younger generation as less and more willing to experiment with the embodiment of gender, which creates a restrictive binary in how older and younger gay men can be understood as gendered and (un)fit for the world of work. Some participants also expressed fear and anxiety that their own mannerisms, speech and appearance could appear feminine. Here, ageing emerged as a problematic social process, where participants shaped their masculinity practices to avoid embodying age-specific, gay-negative stereotypes.
My worst nightmare is if people saw me as an old queen. We're in a different world now … but just like a straight man wouldn't wish to be known as a dirty old man, I wouldn't wish to be seen as that old queen irritating everyone … (Anthony, consulting, 64)

The fear of being labelled as an 'old queen' exposed the precariousness of managerial status gains by older gay men who reproduced hegemonic masculinity. The participants sometimes justified their worries about being stereotyped in this way by referring to older straight men who were anxious about similar age-stereotyping (e.g., the dirty old man). Yet, unlike the 'dirty old man' label, which may reference unacceptable sexual behaviour (e.g., sexual harassment), the 'old queen' label is drawn from sexist stereotypes to penalize allegedly hysterical older gay men.
Some of our participants tried to avoid drawing attention to their perceived disadvantaged position as older gay men by distancing themselves from workplace gender issues. In particular, participants considered that openly arguing for change, such as discussing and challenging LGBT and gender inequality issues, might incur costly career repercussions.
Ageing isn't particularly impressive … and if you're away with the fairies, talking about social issues like gender pay gap or something, rather than doing a good job earning money for the company, you're toast, the year-end bonus is gone. (Thomas, banking and finance, 57)

In many ways, the participants not only maintained the dominance of heteronormativity in their organizations, but also used it to shore up their own power. However, it was not the case that all study participants understood change-oriented action as detrimental to their careers, as the next section reveals.
Embodying change
In the data, some older gay men's masculinity practices entailed embodying change. The practice of embodying change refers to agentic efforts to leverage a lifetime of accumulated experiences as gay men to negotiate heteronormativity in ways that facilitate change, although such agency is constrained by individuals' material and symbolic commitments to heteronormativity. The participants' desire to defend their power and privilege in the workplace undermined their capacity to seek transformative change.
Some participants believed that embodying change involved making hard choices, because a direct challenge against organizational heteronormativity seemed untenable in their work contexts. In particular, some practised a cautious and calculative masculinity, which prioritized reconciling competing interests in lieu of openly questioning unfair organizational practices. Participants' embodied agency was shaped by the social implications of ageing and sexuality, which fostered a cautious and defensive approach to change. As a finance manager working for a commercial bank averred:

What I bring to the table is an exceptionally analytical approach. I go into meetings super well-prepared, armed with facts and figures, to protect myself from those who'd like to take me down a peg or two, whether it's because of their prejudices or our competitive culture … When it comes to diversity, with ageism and homophobia, I could be emotional about it, but that's the easy way out. It's better to win people over with carefully presented evidence which I think beats all the emotional speechifying and posturing.
(John, banking and finance, 58)

Some participants indicated that they took care to interact with older straight men at work, many of whom were positioned as the traditional power brokers in their organizations. They preferred to change their organizations from within and slowly, by being measured, strategic and political. As a structured finance senior associate working for a global accountancy firm suggested:

… it's about making the right steps, being considered and sure when you approach the difficult issues, it takes a lot of minute politicking to create the right culture. (Alistair, accounting, 53)

Some participants also thought being 'older and wiser' meant that they gained a degree of credibility, which countered some of the stigma attached to being older and gay. They considered ageing as an asset that facilitated them to engage with organizational leaders more persuasively. For example, some of the participants suggested that they utilized their age and experience as a track record to claim leadership roles in high-profile initiatives that championed LGBT workplace issues. Sometimes, participants seemed to negotiate ageing as a competitive strength instead of a liability. As a managing partner employed at a legal firm put it:

I don't see age as a problem at all. If anything, it gives you credibility. Gay men actually gain credibility as they get older. It's easier to persuade people that I know what I'm talking about, that I've been there and done that, and when I say this is a worthwhile action, they can trust that I've looked at the costs, and weighed the risks, and that it's experience speaking, and that they can rely on what I say. (Benjamin, law, 65)

In addition, participants' embodiment of change was shaped by the resources available to them in their organizations. Despite pressures to conform to hegemonic masculinity, as men in senior managerial and professional roles, the participants exercised organizational power by using levers of influence to push for change:

… LGBT people in senior leadership, then the organization will change anyway, so that's job done … So I seek indirect change, but it's real change, and I am doing it without having to compromise my own power base.
(John, banking and finance, 58)

John suggests that change is an incremental process that must be pursued prudently. His idea of change relies on a critical mass of LGBT people in senior positions being able to dismantle the heteronormativity of his workplace, aided by his relationship building with key figures in the organization. However, his notion of and strategy for change assumes that greater representation of LGBT individuals in senior positions will inevitably alter the organization, which fails to consider how LGBT managers and professionals may have no interest in contesting organizational heteronormativity, especially if they wish to protect their status and capacity to exercise power.
For some older gay men, the practice of embodying change involved negotiating ageing by way of their political competence, shaped by an intimate knowledge of the wider social and cultural landscape. These participants came of age during the gay liberation movement (from the late 1960s to mid-1980s) and some of them referred to a notion of social progress as linear, which infused their life histories and offered them skills to practice their masculinities differently. In some of the participants' accounts, references to physical toughness and robustness featured prominently, and were emblematic of masculinity practices to counter the constraining effects of ageing. These participants often tried to project a mix of physical might and experiential knowledge, which allowed them to fit in with the hegemonic masculine ideals of their organizations, even as they attempted to challenge them.
Look, don't get me wrong, I'm not a spent force. I'm still the same man I was ten years ago. I can still work like a machine, and I do work like a maniac sometimes, but it's not a physical energy issue, it's about my sanity … I push the gender agenda … but I wouldn't go round telling male colleagues how to behave. I know what works, and that wouldn't work. (Martin, consulting, 60)

Some participants made use of their political know-how to contest inequalities via their organizational acumen, which led them to favour compromise, engagement and persuasion. They channelled their political competency to change hearts and minds, practising masculinity by trying to change the silhouette of hegemonic masculinity, while actively minimizing confrontation.

Deploying Connell's (1995) concept of practice as body-reflexive social action, our research demonstrates how older gay men practice masculinities by utilizing their bodies as a site for reproducing heteronormativity, as well as negotiating heteronormativity to argue for change. Prior research shows that social expectations surrounding masculinity can generate multiple disadvantages that hamper LGBT workers' careers (e.g., Drydakis, 2015; Stenger & Roulet, 2018), and burden their workplace social relations (e.g., Hoel et al., 2014; Rumens & Kerfoot, 2009), continually marking them as the unsuitable other (e.g., Benozzo et al., 2015; Rennstam & Sullivan, 2018). Yet, this literature neglects the imbrications of masculinity and embodiment, as if ageing or other social processes involving the body have no significant bearing on masculinity. As we show, masculinity as a bodily practice (re)shapes the contours of exclusion and marginalization in organizations.
DISCUSSION
In this article, ageing is a social and embodied process with enabling and constraining effects on how older gay men operate in heteronormative work contexts. In particular, ageing can shape how older gay men practice masculinity to evade negative age-related, heterosexist stereotypes (Rumens, 2018). Yet, in so doing, they can sustain the heteronormative status quo that subjugates them. Our findings suggest that some gay men vie for and achieve authority in their organizations by embodying hegemonic masculinity, which is characterized by marginalization and exclusion towards femininities (see also Barrett, 1996; Kerfoot & Knights, 1993, 1998). For example, older gay men reproduce a gender hierarchy within the sexual category of 'gay man' to assume a position of legitimacy and superior status by actively marginalizing gay femininities as undesirable in the context of professional work. This is a meaningful elaboration of the concept of hegemonic masculinity because it allows us to account for the continuing resonance of organizational heteronormativity in a seemingly more hospitable, 'gay-friendly' culture towards LGBT people (Giuffre et al., 2008; Rumens, 2018).
Our participants draw significant utility from experiences, skills and acumen acquired through a long personal history of LGBT activism, signifying ageing as an accumulation process (Riach & Cutcher, 2014). Using their accumulated, embodied hegemony and power, which is a legacy of their age, older gay men argue for change that improves the position and experience of LGBT workers, although they do so without fully undermining heteronormativity. On the one hand, their age and accumulated power in organizations affords them the ability to exercise change-oriented agency in ways that younger gay men may find difficult to perform. On the other hand, ageing can also attune some older gay men to a preference for compromise, risk aversion and preserving power and status. One potential outcome here is that the type of change favoured by older gay men tends to accommodate the interests and needs of LGBT workers (e.g., developing LGBT-supportive policies, sponsoring LGBT events, etc.), falling short of change that seeks to transform the organizational structure of heteronormativity.
Advocating the latter, which entails practising masculinity to expose power relations and individuals who uphold hegemonic masculinity and heteronormativity in the workplace, may jeopardize the status and privilege some older gay men have accumulated.
While practice is at the core of masculinity, hegemonic masculinity is a structural effect that operates above the individual level (Connell, 1987, 1995; Connell & Messerschmidt, 2005). In this study, to examine hegemonic masculinity through the analysis of older gay men's practices, we engaged with Reed's (2003) relationist perspective on the interplay of agency and structure. From that perspective, we observed that older gay men's masculinity practices had constitutive capacity, as we traced their regenerative effects on heteronormativity. Furthermore, the beliefs, rules and values that inhered in the heteronormative order informed the older gay men's masculinity practices, owing to the participants' material and symbolic embedment within their organizations (see Reed, 1985).
In particular, older gay men's change activity hinged on what was normatively imaginable and practicable in their particular social world, and thus dependent on the extent to which the pre-existing instantiation of organizational heteronormativity allowed space for change around LGBT workplace issues.
Although our findings square with Reed's (2003) theorizing on agency and structure, they nonetheless indicate that relationism can benefit from explicitly incorporating embodiment in the process of individual agency. As we showed, older gay men's bodies can play a part in expanding and contracting the scope for practising masculinity, figuring as a dynamic feature of our participants' orientation toward heteronormativity. Our research extends Reed's (2003) relationist theorizing by accounting for the body as key to agentic effects in relation to structure. We emphasize the need for a conceptual shift away from considering the body as a prop that people utilize variously to perform agency, because as we show in this study the body can be the primary social medium of agentic action in organized life.
Our research highlights the need to better account for change-seeking actions of LGBT workers. The organization sexualities literature provides significant insights into how heteronormativity constrains LGBT workers (e.g., Rumens & Kerfoot, 2009; Ward & Winstanley, 2006). Yet, we need more in-depth and body-informed accounts of how LGBT workers can problematize hegemonic masculinity to undo organizational heteronormativity. Our findings act in a corrective fashion, bringing hegemonic masculinity, age and sexuality to the foreground through an embodied practice perspective to explore their interplay in the context of organizations.
Furthermore, our study underscores the importance of reading ageing and sexuality into the organization in order to fully account for the gendered realities of older gay men's work lives. We submit that organization masculinities scholarship in 'naming men as men' (Collinson & Hearn, 1994) is limited by its patchy attention to ageing (Riach & Cutcher, 2014) and sexuality (Rumens, 2014). Our findings explicitly link organizational gendering processes to older gay men's practices of masculinities. The reproduction of organizational inequalities involves the actions of a wide array of organizational members, encompassing more than the middle-aged, heterosexual men who historically constituted the core object of study for the organization masculinities literature. As our analysis shows, organizational heteronormativity can strongly (dis-)advantage some older gay men in managerial and professional positions, reinforcing the importance of scholarly accounts of hegemonic masculinities that extend analyses of gendered power and privilege at work to include age and sexuality.
CONCLUSION
We find both rigour and limitation in studying a specific group of older gay men in heteronormative work contexts.
As Connell and Messerschmidt (2005) state, practices of masculinity can only be studied fruitfully by analysing a distinctive group of individuals' social actions in a particular context, because gender and sexuality relations are contextually contingent in how they shape individuals' opportunities for social action. Yet, this also means that our findings must be considered within the work contexts of our participants who have specific characteristics (affluent older white gay men). Our participants' masculinity practices do not represent the experiences of all older gay men, and we emphasize that focusing on racialized older gay men in organizations in future research would further enrich our understanding of ageing and sexuality in organizations.
As we reflect on our research, we consider some of our participants' striking sense of entitlement to power and position as an embodiment of class privilege. In personal interactions, the participants' self-image often appeared unencumbered by doubts about their worth and right to career advantage, as they seemed to suggest that they were special people who possessed top talent and deserved exceptional corporate success. A more mixed sample reflecting marginalized classes could surface different perspectives on how well-educated senior professionals and managers view privilege in organizations.
Future research is also needed to explore how hegemonic masculinity is practised by other gay men in more diverse work contexts, especially those organizational settings where the relationship between hegemonic masculinity and heteronormativity might be open to contestation such as in public services and the arts.
Lesbians, bisexual women and men, trans and queer workers are likely to experience the interplay between ageing, sexuality and masculinity differently. For example, older men transitioning to the identity category of 'woman' may find they abandon practices that comply with hegemonic masculinity at work.
Additionally, older lesbians may practice masculinity and femininity in ways that conform to or contest wider social, heteronormative expectations and ideals about how women should look and behave. This area of scholarship is empirically open and it is our hope that scholars will undertake research that builds upon extant studies of ageing, masculinity and sexuality in organizations. Masculinity as embodied practice contributes to both possible change and continuity in heteronormative orders. As such, it is vital to continue exploring the interplay between various masculinities and the specific instantiations of heteronormativity in different organizational contexts.
DECLARATION OF CONFLICTING INTERESTS
We have no conflicts of interest to declare. | 2020-05-21T09:17:48.915Z | 2020-05-16T00:00:00.000 | {
"year": 2020,
"sha1": "1667dbabfcb13e434b1b459956dc500b7fad8783",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/gwao.12469",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "30e4f41e77719dce0e8a2b919e13796804231e88",
"s2fieldsofstudy": [
"Sociology",
"Business"
],
"extfieldsofstudy": [
"Sociology"
]
} |
10184819 | pes2o/s2orc | v3-fos-license | Mind the gap between policy imperatives and service provision: a qualitative study of the process of respiratory service development in England and Wales
Background: Healthcare systems globally are reconfiguring to address the needs of people with long-term conditions such as respiratory disease. Primary Care Organisations (PCOs) in England and Wales are charged with the task of developing cost-effective patient-centred local models of care. We aimed to investigate how PCOs in England and Wales are reconfiguring their workforce to develop respiratory services, and the background factors influencing service redesign.

Methods: Semi-structured qualitative telephone interviews with the person(s) responsible for driving respiratory service reconfiguration in a purposive sample of 30 PCOs. Interviews were recorded, transcribed, coded and thematically analysed.

Results: We interviewed representatives of 30 PCOs with diverse demographic profiles planning a range of models of care. Although the primary driver was consistently identified as the need to respond to a central policy to shift the delivery of care for people with long-term conditions into the community whilst achieving financial balance, the design and implementation of services were subject to a broad range of local, and at times serendipitous, influences. The focus was almost exclusively on the complex needs of patients at the top of the long-term conditions (LTC) pyramid, with the aim of reducing admissions. Whilst some PCOs seemed able to develop innovative care despite uncertainty and financial restrictions, most highlighted many barriers to progress, describing initiatives suddenly shelved for lack of money, progress impeded by reluctant clinicians, plans thwarted by conflicting policies and a PCO workforce demoralised by job insecurity.

Conclusion: For many of our interviewees there was a large gap between central policy rhetoric driving workforce change, and the practical reality of implementing change within PCOs when faced with the challenges of limited resources, diverse professional attitudes and an uncertain organisational context. Research should concentrate on understanding these complex dynamics in order to inform policymakers, commissioners, health service managers and professionals.
Background
The increasing prevalence of long-term conditions is acknowledged as an important challenge for healthcare services globally. [1,2] The need to care for those with long-term disease in an ageing population places considerable demands on existing health and social care resources.
Respiratory conditions, currently responsible for 7% of deaths in the UK, [1] are predicted to become one of the leading five causes of chronic ill health globally by 2020. [3] Chronic Obstructive Pulmonary Disease (COPD) is responsible for one in eight emergency admissions to hospital. [4,5] Following two high-profile reports which highlighted the need for personalised, structured and integrated care for people with COPD in order to manage the disease burden more effectively, [5,6] a National Service Framework (NSF) has been commissioned.
In the UK, a number of policies have been introduced to address the challenge of caring for people with long-term conditions. Learning from US managed care programmes, the long-term condition pyramid (LTC pyramid) is suggested as an important framework for designing services, [7] with community matrons providing case management for people with complex needs at the top of the pyramid (see figure 1).

[Figure 1. Pyramid of care for long-term conditions]

The Quality and Outcomes Framework of the General Medical Services contract aims to improve primary care standards, [8] and investment in Expert Patient programmes and health literacy support self-care at lower levels of the pyramid. [9] 'Care Closer to Home' is widely promoted as offering a cost-effective alternative to expensive hospital treatment, with specific initiatives such as Hospital at Home schemes, and GPs with special interests (GPwSIs), seen as important components of intermediate care services integrating primary and secondary care. [2,5,7,10] In England and Wales, PCOs are charged with the responsibility to commission services to implement these policies according to local need. Although reviews of the evidence on diffusion of innovation in the health service, [11] and summaries of advice on achieving organisational change in the NHS are available, [12] there is a need to understand how policy is implemented in practice amidst current changes and reorganisations within the NHS.
Our study aimed to investigate how PCOs (i.e. Primary Care Trusts in England and Local Health Boards in Wales; freestanding statutory NHS bodies with responsibility for delivering healthcare and health improvements to their local areas) reconfigure their workforce to develop respiratory services and to meet the needs of people with long-term conditions. Our previous work suggested that up to a third of PCOs were considering including GPwSIs in their respiratory service, [13] based on evidence that they can safely provide care for a proportion of patients otherwise referred to secondary care, [14] and that clinical outcomes are similar, with patients often equally or more satisfied with the service. [15][16][17] Our study therefore specifically aimed to study the development of GPwSI-centred service models within the context of other (often nurse-led) models.
We here report the first phase of the study in which we explored the context, drivers, barriers and facilitators to respiratory service reconfiguration in a purposefully selected sample of PCOs in England and Wales, representing a broad spectrum of attitudes and levels of development in the reconfiguration of respiratory services. This 'baseline' phase had the dual objective of enabling us to select four PCOs for in-depth case study (to be reported in due course) and also of providing the broad context for further evaluation.
Methods
This study was undertaken with the ethics approval of the Southeast Multi-Centre Research Ethics Committee and governance approval from all participating PCOs. [18] All participants provided informed consent.
We recruited a purposeful sample of PCOs, representing a broad spectrum of potentially relevant factors and influences, including demographic and geographic profile and existing or planned models of community-based respiratory care. As the primary interest of our study was the role of GPwSIs, we specifically sought a number of PCOs with GP or GPwSI involvement in reconfiguring respiratory services. Our initial selection was based on our knowledge of PCOs' intentions from a previous survey, [13] and on expressions of interest received in response to the publication of the General Practice Airways Group Respiratory GPwSI resource pack. [19] These were supplemented by snowball sampling to identify PCOs reputed to have in place or be planning novel models of care.
At the time of the interviews there was a total of 330 PCOs; however, we were aware of imminent mergers, which subsequently reduced the number of PCOs to 110. We took this into consideration when recruiting, in order, for example, to avoid overlap where PCOs were already working closely with their prospective partners.
We approached PCOs by letter, followed up by a phone call, requesting a 45-minute telephone interview with the person(s) responsible for driving the reconfiguration of respiratory services or, in the case of PCOs not planning reconfiguration of respiratory services, the person responsible for other comparable chronic disease services in the PCO. We planned to recruit until we identified no new models of care and were satisfied we had reached data saturation.
Based on our previous work, [13,20] and our understanding of current policies and discussions relating to the management of long-term conditions, [2,5-7,10,21] we devised a semi-structured interview schedule, collecting data on size and demographics of the PCO, financial and organisational context, the current priorities, preferred model of care for respiratory disease, key drivers, barriers and facilitators (see Additional file 2, Appendix 1 for the full schedule). The topic guide was reviewed by the multidisciplinary team in an iterative process as the interviews progressed.
The interviews were conducted by one researcher (AT) who made extensive field notes on pre-structured forms. Interviews were audio-recorded (apart from interviews 1 and 2, because of technical problems) and fully transcribed. Analysis of the interview data was undertaken by two researchers (SH and HP) using the thematic method described by Ziebland et al. [22] Emergent themes were discussed by all members of the multidisciplinary team during project meetings and workshops.
Participants
We sent a postal invitation to 110 PCOs between February and June 2006; 40 agreed to consider our request. After gaining permission from line managers, 30 identified a suitable person for an interview. The demographic details, merger and financial status of the PCOs and the professional role of the interviewees are summarised in table 1 (see additional file 1).
Models of care
Within the 30 sampled PCOs, we identified a range of respiratory service models, often including a combination of approaches, with multidisciplinary teams providing a respiratory service. We reached saturation in terms of the service models identified.
In summary, we have categorised these models according to their main focus, as described by the interviewees.
• Nine PCOs specifically involved GPs, either as GPwSIs or through less formal arrangements with local 'interested GPs'.

• Five were developing, or considering developing, respiratory GPwSI services.
• Sixteen had, or were developing, a role for community matrons in COPD care.
• Fifteen were nurse-led models, and a further seven included nurses in multi-disciplinary respiratory teams.
• Three were developing models incorporating consultants working in the community.
• Two PCOs were not prioritising respiratory care.
The models were in varying stages of development and implementation at the time of the interviews, but the fluidity of the process, and variability between different aspects of reconfiguration within individual PCOs made it impossible to give a meaningful indication of the phase of development.
Throughout the interviews, the impact of change emerged as an important theme which, in many cases, was discussed in terms of a positive/negative dichotomy, both driving and impeding development. Reconfiguration of respiratory services was discussed within the context of the changing environment of the NHS in England and Wales as, at the time of the interviews, many of the Primary Care Organisations were merging and/or undergoing structural reorganisation. Change impacted on all stages of respiratory service development, from the initial drivers through the design phase to the implementation. We identified three phases of change and model development (summarised in figure 2): 1) Drivers for change, 2) Designing new models of care, and 3) Implementing change.
I Drivers for change
Central policy

Many interviewees described the primary drivers to redevelopment as being central policies, particularly on shifting care into the community, the proactive management of long-term conditions and broadening of professional roles. The impending PCO mergers and commissioner-provider split provided a fluid and uncertain context for these changes.
"..again I think PCO initiatives seem to be driven from central government which, you know, is understandable to a certain extent but the nature is that it tends to, unless you're very different and you're very enthusiastic you'll find that to implement any change is extremely difficult." (PCO 14: GPwSI service, Interviewee: GPwSI) "...with the focus on cutting out-patients and particularly follow-ups there is a, the Trust has been put under, the consultants have been put under pressure themselves and so they're desperate for solutions. And so when we came along with some solutions, they were very keen to listen." (PCO 6: Respiratory nurse service, Interviewee: Commissioner)
Local need
Recognition that change was needed to enhance local patient care was another important driver. Several PCOs were investing time and money in exploring local need with scoping exercises, or audits of service use, and a few were commissioning interviews and focus groups to help them understand the patients' perspective. Some PCOs valued the input of local practitioners as a means of gauging patient needs, though others were concerned that clinical perspectives might not always reflect those of patients.
"Actually I think we have a very lively input from patients that we've made sure that [the] patient voice is at the centre of this. Our patients have said to us what is important for them and our service development group has made that a key priority. We've had some effort to encourage practices to take onboard a patient, a patient advisor." (PCO 8, Respiratory nurse service, Interviewee: Service development manager) "...my own driver is really an interest in respiratory because I feel that as a group of patients over the years with the way that the primary care has gone certainly we've had NSF for coronary heart disease and diabetes, those who have respiratory problems have sort of been neglected to a sort of second division and I feel that that's particularly unfortunate given the huge amount of morbidity that's around with regards to respiratory disease..." (PCO 14: GPwSI service, Interviewee: GPwSI)
Financial balance
The imperative to achieve financial balance was frequently cited as a driver for change. Budgetary and resource restrictions both drove service redesign by imposing a need for cost saving alternatives to hospital admissions, and acted as a major barrier as plans were shelved to save money.
"Well the top priority, I am sure you are going to hear this everywhere, is financial, absolutely nothing to do with redesign, but that is the absolute top." (PCO 3: GPwSI-led team, Interviewee: Commissioner) "...although current changes are said to be clinically led, the truth is they aren't. There's a significant gap between rhetoric and reality, which leaves clinicians exasperated, because their commitment to the well-being of their patients comes second to economic and political forces." (PCO 14: GPwSI service, Interviewee: GPwSI)
II Designing new models of care
Financial strategy

Almost all interviewees spoke of how financial restrictions impacted on the design of respiratory services. In some cases there was insufficient funding to develop a desired service: in others service development proceeded successfully only to have progress (often suddenly) aborted due to removal of funding. Models were often chosen because of their cost saving potential. In some cases these were not the preferred models, however financial restrictions did not allow for the more expensive (yet considered potentially better) model of care. Specifically, a GPwSI service was often rejected as being too expensive in relation to other options. Sometimes the choice of model was dictated by the presence of a funding stream for a specific model of care (for example: charity funding to start up an asthma education project for parents, pharmaceutical company sponsorship for pulmonary rehabilitation, or funding for initiatives to attract GPs to under-doctored areas used to support GPwSI training).
"And there was some LDP [Local Delivery Plan] money which was put aside for chronic disease management which, fortunately for me, wasn't earmarked for any specific project and, so, what we did, we have a clinical reference group for respiratory diseases which covers all areas of the health economy and we put together a business plan basically which identified that we, we think we can reduce emergency admissions by 30% or more Summary of the phases of change and model development
Professional interests
The presence of professional support or opposition was highlighted as an important factor influencing choice of model redesign. Some interviewees described how clinicians from primary or secondary care could actively "champion" preferred models or conversely how opposition (for example from consultants) could mean that certain choices were avoided. Examples were cited where the narrow perspective of a professional had restricted the possibilities of developing new ways of working, and PCOs had subsequently adopted strategies to counterbalance vested interests. More practically, availability of an individual with professional expertise and interest could determine whether a GPwSI or specialist nurse service was selected.
"And fantastically the consultants, you know, they send me articles they see in Thorax
Uncertainty about PCO reorganisation
Many interviewees commented that the chaos and uncertainty associated with the imminent PCO reorganisation acted as a major block to effective development. Instability and lack of job security within PCOs due to the impending reorganisation meant that managerial positions remained vacant, causing the planning process to stall. By contrast, however, several interviewees spoke positively of the potential for expanding their successful respiratory services to their future partner PCOs, or spoke optimistically of an opportunity to develop a new service.
Discussion
Against a backdrop of uncertainty due to the impending reorganisation and, in some cases, large financial deficits, the PCOs in our study sought to marshal their resources to develop new services to meet the increasing needs of a population with long-term respiratory conditions. Although the primary driver for this reconfiguration was consistently identified as the central policy to shift care for people with long-term conditions cost-effectively into the community, the design and implementation of new services were subject to a broad range of local and at times serendipitous influences which could, and often did, derail the process. Some interviewees described teams of clinicians and managers able to balance policy requirements and local needs in order to develop innovative care, albeit limited by financial restrictions and often with an uncertain future. Most, however, highlighted the many barriers to progress, describing initiatives suddenly shelved for lack of money, progress impeded by reluctant clinicians, plans for reducing hospital care thwarted by 'Payment by Results' and a PCO workforce demoralised by the upheaval and job insecurity of a merger. For many of our interviewees, there was a large gap between policy rhetoric and practical reality.
Limitations and strengths
Our participants may not have encompassed the full range of contexts in PCOs in England and Wales; however, we purposefully sampled trusts with a wide geographic and demographic spread and a range of proposed respiratory service models, and in an attempt to minimise this risk we continued to recruit until saturation was reached. The 30 PCOs who agreed to participate may have been the most enthusiastic about reconfiguring services; however, our purposive sampling included one PCO with no intention to develop a respiratory service and several with very limited plans. In addition, the models described by the participants echoed those identified by a national survey. [13] Our data are derived from a single interview in each PCO, and although we standardised our requests to PCOs, asking to speak to the person responsible for driving the reconfiguration of respiratory services, some interviewees may not have been fully aware of the situation in their PCO. The interviewees had a range of clinical and/or managerial roles, and we recognise that their answers and perceptions will have reflected their individual perspectives. Interviewees may have omitted to mention some issues, though we used a structured topic guide to ensure that we asked specifically about relevant issues.
A major strength of the study is the multidisciplinary expertise (clinical, health service management, anthropological) available within the study team, ensuring balanced conclusions. We continued interviews until we reached saturation with regard to models of care.
Interpretation of findings in relation to previously published work
Although the approach varied, almost all the developments described by our interviewees addressed the complex needs of patients at the top of the LTC pyramid, and focussed predominantly on reducing admissions. [2,7] Even if predictive models can accurately identify 'at risk' patients, [24] a narrow focus overlooks the importance of ensuring early diagnosis and strengthening disease management and supported self-care for those at lower levels of the pyramid to prevent progression and future escalation of care needs, [5,6,25] and perpetuates some of the limitations of the reactive approach to acute care. Short-term planning (often no further than the end of the current financial year), limited resources and the uncertainty of imminent PCO reorganisation were amongst the factors identified by our participants as barriers to developing broader strategies.
Integration across primary and secondary care, and enabling collaboration between multidisciplinary teams of healthcare professionals, are enshrined in policy, [26][27][28] widely advocated in discussion, [5,[29][30][31][32] and supported by some evidence. [33,34] The few PCOs in our study with multidisciplinary teams in place, integrated between the acute sector and the community, seemed better placed to address all levels of the LTC pyramid with their planned respiratory services, providing some support for the fundamental importance of multidisciplinary coordination of care in realising the potential for improved patient care. [34,35]
Our data identify a significant gap between aims and desires at the policy level, and how services are designed and implemented at ground level. Whilst policies were described as significant drivers of change, our interviewees discussed many other important factors impacting on practical service reconfiguration. The shape and effectiveness of service development are influenced by perceived local patient need, professional attitudes, and workforce issues such as the availability of potential GPwSIs. Development proceeds in an environment overshadowed by uncertainty and financial restrictions. The manner and success with which PCOs translate the aspirations of policy into reality appear to be very variable. As a result, services can look very different to users from PCO to PCO, potentially raising concerns about inequity. There is a need to understand why some trusts succeed in reconfiguring services despite the challenges whilst others flounder, in order to inform policymakers, commissioners, health service managers, professionals, and educationalists about effective strategies to implement policy. [36,37]
This paper is a descriptive piece providing broad, baseline context for further in-depth evaluation in subsequent phases of our study. The models we identified can be defined as innovations in health care: 'i.e. novel sets of behaviours, routines and ways of working, which are directed at improving health outcomes, administrative efficiency, cost-effectiveness or the user experience, and which are implemented by means of planned and co-ordinated action'. [38] Uptake and implementation of health innovations are highly context dependent, and the planning and development of the models described by our interviewees was indeed subject to a range of contextual factors such as the availability of funds, the presence of one or more 'champions' to take the lead in development, the negotiation of local professional interests, and the availability of a trained workforce. The models which emerged were products of context, which shaped a process of local negotiations about the mechanisms which would best realise the policy ideal of shifting care into the community. [39] There was also often an element of serendipity in the process, with a chance coming together of key factors to create or impede change. [40]
Our findings resonate with a number of recognised theories of innovation and change management. We observed the described tension between centrally driven innovation and local adoption of 'good ideas', [11] and the paradox that the context, far from being a 'confounder', is integral to the implementation of complex innovation. [41] Our data exemplify the maxim that organisational change is subject to a range of variables which interact to influence outcomes. [11,40] The crucial significance of 'relative advantage', i.e. the need to identify models which offered advantages to all clinicians and managers who needed to be involved in development, was apparent, as healthcare professionals impeded change that they perceived might be disadvantageous. [11] Champions are recognised as key determinants of organisational innovation, [11] echoing our interviewees' accounts of how local professionals had successfully championed developments in their PCO. Such theories can provide insight into how the complex dynamics in some PCOs enable change to occur, whilst impeding change in others.
Conclusion
Whilst some PCOs seemed able to overcome the challenges of organisational fluidity and financial constraints in order to design and implement new services for people with long-term respiratory disease, the resulting services were largely directed at reducing admissions amongst the small number of people with complex needs. For many PCOs, financial deficit, organisational uncertainty, disengaged clinicians, and contradictory policies presented insurmountable barriers to the effective development of sustainable services. In other PCOs these barriers were being overcome and new models of care successfully developed, although their sustainability in the shifting organisational context at the time of the study was in question. Research should concentrate on understanding these complex dynamics in order to inform policymakers, commissioners, health service managers and professionals of effective strategies to implement change.
Abbreviations
Many of these explanations are based on, or reproduced with permission from, the NHS Jargon Buster: Version 2 (February 2008), updated online at http://www.impressresp.com.
Acute Trust: A legal entity formed to provide health services in a secondary care setting.
Community Matron: When a patient has a number of long term conditions and complex needs, their care becomes more difficult for them to manage. Case Management is where a named coordinator, e.g. a Community Matron, actively manages care by offering continuity of care, coordination and a personalised care plan for vulnerable people most at risk.
COPD: Chronic Obstructive Pulmonary Disease
GP: General Practitioner. Family doctor. Patients in the UK access healthcare through the GP practice with whom they are registered.
GPwSI: General Practitioners with a Special Interest. Practising GPs with special expertise in (respiratory) medicine whose role often includes service development as well as clinical care.
LDP: Local Delivery Plan. A 3-year plan that every PCO prepares and agrees with its Strategic Health Authority (SHA) on how to invest its funds to meet its local and national targets and improve services. It is a public document which provides an overview of PCO priorities and how it intends to manage its resources.
LTC: Long-term conditions. Illnesses which last longer than a year, usually degenerative, causing limitations to one's physical, mental and/or social well-being. Symptoms may come and go, and usually there is no cure, but there are things that can be done to maintain or improve the person's quality of life and wellbeing. Long-term conditions include diabetes, COPD, asthma, arthritis, epilepsy and mental health conditions.
LTC pyramid: A pyramid with three levels of professional and self-care widely adopted as a model of service provision for people with long-term conditions. It is based on categorising care according to risk stratification.
NHS: National Health Service. The publicly funded healthcare system in England, Scotland, and Wales.
NSF: National Service Framework. These NHS documents set national standards for the provision of care for a range of disease areas.
PbR: Payment by Results. How secondary care providers in England are now paid. There is a national fixed tariff for emergency care, elective in-patients, day cases and outpatients bought by NHS commissioners. The important | 2018-04-03T05:38:48.727Z | 2008-12-04T00:00:00.000 | {
"year": 2008,
"sha1": "60228c42c815871e0fdbcb34ae064c56256e564c",
"oa_license": "CCBY",
"oa_url": "https://bmchealthservres.biomedcentral.com/track/pdf/10.1186/1472-6963-8-248",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "387d540cb24c5342585acd761401f607193e2bc2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118630712 | pes2o/s2orc | v3-fos-license | Relativistic quantum clocks
The conflict between quantum theory and the theory of relativity is exemplified in their treatment of time. We examine the ways in which their conceptions differ, and describe a semiclassical clock model combining elements of both theories. The results obtained with this clock model in flat spacetime are reviewed, and the problem of generalizing the model to curved spacetime is discussed, before briefly describing an experimental setup which could be used to test the model. Taking an operationalist view, where time is that which is measured by a clock, we discuss the conclusions that can be drawn from these results, and what clues they contain for a full quantum relativistic theory of time.
I. TIME IN QUANTUM MECHANICS AND GENERAL RELATIVITY
When an experiment is carried out, the experimenter hopes to gain some information about nature through her controlled interaction with the system under study. In classical physics, systems possess a set of measurable properties with definite values, which can in principle be interrogated simultaneously, to arbitrary accuracy, and without affecting the values of those properties. Any uncertainty in the measurements arises from some lack of knowledge on the part of the experimenter (for example due to some imperfect calibration of the apparatus) which could, in principle, be corrected. In quantum theory, on the other hand, uncertainty relations between conjugate variables, and the necessary backreaction of the measurement on the system, combine to pose strict limits on the information which can be obtained from nature. Although there are nontrivial complications in defining time as a quantum observable (see the introductory discussion in [1], and Section 12.8 of [2], for example), it is nonetheless apparent that quantum restrictions must also be applied to its measurement [3][4][5].
The general theory of relativity (GR) lies within the classical paradigm with respect to the measurements that can be performed, though the outcomes of such measurements are affected by the observer's state of motion, and the distribution of energy around them. The theory is built upon the notion of "ideal" clocks and rods, through which the observer gathers information. In special relativity, an ideal clock is a pointlike object whose rate with respect to some observer depends only on its instantaneous speed, and not directly on acceleration [6]. The latter property is sometimes referred to as the "clock postulate", and can be justified by the fact that an observer can "feel" their own acceleration, in contrast to velocity. Therefore, given a clock whose rate depends on acceleration in a well-defined manner, one can simply attach an accelerometer to it, and use the resulting measurements to add/subtract time such that the acceleration effect is removed, recovering an ideal clock. Combining this clock postulate with the constancy of the speed of light, one finds that an ideal clock measures the proper time along its trajectory according to the usual formulas of special relativity. The concept of an ideal clock (and therefore proper time) is imported into GR via Einstein's equivalence principle [6]. This principle states that local experiments conducted by a freely-falling observer cannot detect the presence or absence of a gravitational field. Here "local" means within a small enough volume that the gravitational field can be considered uniform.
We note four conceptual issues which arise when combining GR and quantum theory. The first is understanding how quantum theory imposes constraints on the clocks and rods of GR, and how this in turn affects the information gathered by an observer. Here, we concern ourselves with clocks, and we refer the reader to [7] for a review of possible limitations to spatial measurements. Some progress has been made with this issue, for example [8], wherein the mass and mass uncertainty of a clock system are related to its accuracy and precision (neglecting spacetime curvature). In [9], using a gedankenexperiment, one such mass-time relation is rederived and combined with the "hoop conjecture" (a supposed minimum size before gravitational collapse [10]), to argue that the product of a clock's spatial and temporal uncertainty is bounded below by the product of the Planck length and the Planck time.
A second, perhaps more difficult problem is that of reconciling the definition of time via a pointlike trajectory in GR with the impossibility of such trajectories according to quantum mechanics (a result of the uncertainty principle between position and momentum). A third issue is the prediction that acceleration affects quantum states via the Unruh [11,12] and dynamical Casimir [13] effects (DCE), which in turn will affect clock rates [14]. One must therefore reconsider whether it is always possible to measure and remove acceleration effects and recover an ideal clock. Finally, the fourth issue is that, given the locality of the equivalence principle (i.e. it only holds exactly when we consider a pointlike observer), it is unclear to what extent it applies to quantum objects, which do not follow pointlike trajectories.
We investigate the interplay of these four issues, seeking to answer the following questions: what time does a quantum clock measure as it travels through spacetime, and what factors affect its precision? What are the fundamental limitations imposed by quantum theory on the measurement of time, and are these affected by the motion of the clock? To answer these questions, we cannot in general rely on the Schrödinger equation, as we must use a particular time parameter therein, which in turn requires the use of a particular classical trajectory. The relativistic clock model detailed in Section III gives a compromise; its boundaries follow classical trajectories, but the quantum field contained therein, and hence the particles of that field, do not. In Section IV we examine the extent to which this clock has allowed the four issues discussed above to be addressed, and possible future progress.
Given the difference in the scales at which quantum theory and GR are usually applied, one may ask what we expect to gain by examining their overlap. Our response to such a question is threefold. Firstly, we note that optical clocks have reached a precision where gravitational time dilation as predicted by GR has been measured over scales accessible within a single laboratory [15]. Indeed, modern clocks are precise enough that they are sensitive to a height change of 2 cm at the Earth's surface [16]. Given the rate of improvement of this technology (see Figure 1 of [17], for example), one can anticipate an even greater sensitivity in the near future. The detection of a nuclear transition in thorium-229 [18], proposed as a new frequency standard [19], means that we may soon enter an era of "nuclear clocks", surpassing that which is achievable with clocks based on electronic transitions. Considering this ever-increasing precision together with proposals to exploit quantum effects for superior timekeeping (e.g. [20,21]), we argue that a consideration of GR alongside quantum theory will become not simply possible, but in fact necessary in order to accurately describe the outcomes of experiments.
Our second response is to point out the possibility of new technologies and experiments. There are already suggestions exploiting the clock sensitivity mentioned above, such as the proposal to use changes in time dilation for earthquake prediction and volcanology [22]. On the other hand, there are proposals to use effects which are both quantum and relativistic in order to measure the Schwarzschild radius of the Earth [23], or to make an accelerometer [24], for example. See [25] for a review of experiments carried out or proposed which employ both quantum and general relativistic features. Beyond specific proposals, there are practical questions which we cannot answer with quantum mechanics and GR separately; for example, what happens if we distribute entanglement across regions with differing spacetime curvatures, or how do we correlate a collection of satellite-based quantum clocks? The answers to these questions are relevant for proposals to use correlated networks of orbiting atomic clocks for entanglement-assisted GPS [20], or to search for dark matter [26]. Finally, there is a strong motivation from the perspective of fundamental science to investigate the nature of time at the overlap of GR and quantum theory. Beyond the intrinsic interest of finding a coherent combination of the two most fundamental theories in physics, a quantum relativistic conception of time may be of relevance when using quantum clocks to test the equivalence principle [27,28] and to single out GR from the family of gravitational theories obeying this principle [29], for example. In addition, since we expect a viable theory of quantum gravity to also be a quantum theory of space and time, it must either reproduce a relativistic quantum theory of time in the semiclassical limit, or contradict it, giving a potential test of the quantum gravity theory compared to the semiclassical one that we use here.
II. A SEMICLASSICAL APPROACH: QUANTUM FIELD THEORY IN CURVED SPACETIME
To answer the questions raised in Section I, a framework incorporating elements of both quantum mechanics and general relativity is needed. At high energies, one would need a full theory of quantum gravity to do this. However, if we only consider the energy scales accessible in current (or near-future) experiments, and where the spacetime curvature is relatively low, we can employ the semiclassical methods of quantum field theory in curved spacetime (QFTCS). It is semiclassical in the sense that quantum matter and radiation are embedded in a classical curved spacetime, the latter being subject to Einstein's equations. QFTCS also allows us to describe quantum fields from the perspective of non-inertial observers, leading to predictions of novel phenomena related to acceleration, namely the Unruh effect and the DCE, mentioned in Section I. The latter effect has been demonstrated experimentally [30,31], as we briefly describe in Section III E. It is worth underlining that these effects are both quantum mechanical and relativistic in nature, and cannot be derived by, for example, simply inserting a relativistic proper time into the Schrödinger equation of quantum mechanics. To fully include (classical) relativity into the quantum dynamics, one needs QFTCS.
In recent years, aspects of quantum information have been incorporated into QFTCS in a collection of research efforts known as relativistic quantum information. This has allowed, for example, investigations into the effect of spacetime dynamics [32,33] and non-inertial motion [34][35][36][37] on quantum entanglement, and the potentially detrimental [38,39] or advantageous [40,41] consequences of such motion for some quantum information applications.
A particularly fruitful branch of relativistic quantum information is the incorporation of quantum metrology into a relativistic setting [42,43], with a number of possible applications including the measurement of the Schwarzschild radius of the Earth [23] and the detection of gravitational waves in small-scale BEC experiments [44]. The application of relativistic quantum metrology to the measurement of time is the subject of Section III.
III. A RELATIVISTIC QUANTUM CLOCK
A. The clock model
The clock model introduced in [45] allows us to integrate aspects of both general relativity and quantum mechanics. It consists of a particular mode of a localized quantum field; the boundaries confining the field define the spatial extent of the clock, and the clock time is given by the phase of a single-mode Gaussian state. This gives a clock that can undergo classical relativistic trajectories, but whose dynamics are described by QFTCS. The former property means that we can compare this to a pointlike clock by considering a classical observer following the trajectory of the center of the cavity, while the latter property allows us to consider the effect of the spacetime curvature on the whole extent of the quantum field, instead of relying on the Schrödinger equation. The transformation of the quantum state of a localized field due to boundary motion is a well-studied problem in flat spacetime [37,46], particularly the generation of particles due to the DCE [13]. Since the frequencies of the field modes depend on the length between the boundaries, one must be careful to choose the trajectories in such a way that the comparison with the pointlike classical clock is a fair one. One must also be careful to distinguish between classical effects arising purely from the spatial extent of the clock, and novel quantum effects due to mode-mixing and particle creation.
To analyze the effect of non-inertial motion and spacetime curvature on the clock, we first need to describe their effect on its quantum state, giving us the change in phase (i.e. clock time). Since the phase is subject to a quantum uncertainty relation with respect to the particle number (see [47], for example), a change in the state of the field will in general modify the precision with which the phase can be estimated. Once these changes have been determined, one can compare the overall phase with the corresponding classical result to find quantum relativistic shifts in the clock time, and one can see how the precision of the clock is affected by considering the change in phase estimation precision. Before discussing the results obtained using this clock model, we give a brief overview of the framework underpinning it.
B. Theoretical framework
A localized quantum field in curved spacetime
The simplest quantum field theory is that of the massless scalar field. This can be used, for example, to approximate the electromagnetic field when polarization can be ignored [48], or phononic excitations in a proposed relativistic BEC setup [49]. For simplicity, we consider one spatial and one temporal dimension. In a general 1+1D spacetime, the massless scalar field satisfies the Klein-Gordon equation [50]

$\Box \Phi = 0. \quad (1)$

In some coordinate system $(t, x)$, imposing the boundary conditions $\Phi(t, x_1) = 0$ and $\Phi(t, x_2) = 0$ for a given $x_1$ and $x_2$, we describe either an electromagnetic field in a cavity or the phonons of a BEC trapped in an infinite square well.
After finding a set of mode solutions to Equation 1, which we denote $\phi_m(t, x)$, one can (under certain conditions, discussed briefly in Section III D) associate particles with the modes, and quantize the field by introducing creation and annihilation operators $a_m^\dagger$ and $a_m$. These satisfy the usual bosonic commutation relations, $[a_m, a_n^\dagger] = \delta_{mn}$, and can be used to define the vacuum and Fock states in the usual way. The total scalar field is then given by

$\Phi = \sum_m \left( a_m \phi_m + a_m^\dagger \phi_m^* \right). \quad (2)$

If the field can be described in terms of a second set of mode solutions, we can relate these to the first set by means of a Bogoliubov transformation. Denoting the creation and annihilation operators associated with the new set of solutions by $b_m$ and $b_m^\dagger$, the Bogoliubov transformation can be written as

$b_m = \sum_n \left( \alpha_{mn}^* a_n - \beta_{mn}^* a_n^\dagger \right), \quad (3)$

where $\alpha_{mn}$ and $\beta_{mn}$ are known as the Bogoliubov coefficients, and can be computed using an inner product between the first and second sets of mode solutions (see [50] for details). These transformations can be used, for example, to represent changes in coordinate system between inertial and non-inertial observers, or the effect of Gaussian operations or of spacetime dynamics. Mixing between modes due to the transformation is determined by the $\alpha_{mn}$, while the $\beta_{mn}$ correspond to the generation of particles. The fact that the $\beta_{mn}$ are non-zero for Bogoliubov transformations between inertial and non-inertial observers leads to the Unruh effect and the DCE.
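For the flat-spacetime cavity, the first set of mode solutions takes a simple standing-wave form. The following Python sketch constructs them explicitly; note that it assumes Minkowski coordinates and a commonly used Klein-Gordon normalization, neither of which is spelled out in the text above.

```python
import numpy as np

C = 1.0  # speed of light in natural units

def cavity_mode(m, x, t, x1, x2):
    """Standing-wave solution of the 1+1D Klein-Gordon equation with
    Dirichlet conditions Phi(t, x1) = Phi(t, x2) = 0 (flat spacetime assumed):
    phi_m ~ sin(m*pi*(x - x1)/L) * exp(-i*omega_m*t), with omega_m = m*pi*c/L."""
    L = x2 - x1
    omega_m = m * np.pi * C / L
    norm = 1.0 / np.sqrt(m * np.pi)  # an assumed normalization convention
    return norm * np.sin(m * np.pi * (x - x1) / L) * np.exp(-1j * omega_m * t)

# The mode vanishes at both boundaries, as the Dirichlet conditions require.
x = np.linspace(0.0, 1.0, 5)
print(cavity_mode(m=1, x=x, t=0.0, x1=0.0, x2=1.0))
```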
The covariance matrix formalism
The relativistic clock model described in Section III A makes use of only a single mode of the field after the transformation. It is then very advantageous to work with the covariance matrix formalism, which greatly simplifies the process of taking a partial trace over field modes. In doing so, we restrict ourselves to the consideration of Gaussian states of the field. The set of such states is closed under Bogoliubov transformations. Defining the quadrature operators for mode $n$ by $X_{2n-1} := \frac{1}{\sqrt{2}}\left(a_n + a_n^\dagger\right)$ and $X_{2n} := -\frac{i}{\sqrt{2}}\left(a_n - a_n^\dagger\right)$, a Gaussian state is completely determined by the first moments $q^{(n)} := \langle X_{2n-1} \rangle$ and $p^{(n)} := \langle X_{2n} \rangle$, and the second moments, i.e. the covariance matrix

$\sigma_{ij} = \langle X_i X_j + X_j X_i \rangle - 2 \langle X_i \rangle \langle X_j \rangle. \quad (4)$

To take a partial trace over some modes, one simply removes the corresponding rows and columns from the covariance matrix. Let $k$ and $\sigma^{(k)}$ denote respectively a mode of interest and the reduced covariance matrix of that mode. Now consider some initial state with first moments $q_0$. After a Bogoliubov transformation, the first and second moments are given by [42,43]

$q \mapsto \mathcal{M} q_0, \qquad \sigma \mapsto \mathcal{M} \sigma \mathcal{M}^T, \quad (5)$

with

$\mathcal{M}_{mn} = \begin{pmatrix} \mathrm{Re}(\alpha_{mn} - \beta_{mn}) & \mathrm{Im}(\alpha_{mn} + \beta_{mn}) \\ -\mathrm{Im}(\alpha_{mn} - \beta_{mn}) & \mathrm{Re}(\alpha_{mn} + \beta_{mn}) \end{pmatrix}. \quad (6)$

A single-mode Gaussian state is also characterized by the following parameters: the (real) displacement $\alpha$, the (complex) squeezing $\xi = r e^{i\varphi}$, the phase $\theta$ and the purity $P$. These parameters can be expressed in terms of the first and second moments, as given in Equations 7 of [56]; for example, the purity is $P = 1/\sqrt{\det \sigma^{(k)}}$.
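As an illustration of how compactly this formalism handles transformations and partial traces, the following Python sketch applies a symplectic transformation built from Bogoliubov coefficients (Equations 5 and 6, as reconstructed above) to the moments of a two-mode Gaussian state and then reduces to a single mode by deleting rows and columns. The coefficient values are arbitrary placeholders, not taken from the papers cited.

```python
import numpy as np

def symplectic_block(alpha, beta):
    """2x2 block of the matrix in Equation 6 for one pair of modes."""
    return np.array([
        [ (alpha - beta).real, (alpha + beta).imag],
        [-(alpha - beta).imag, (alpha + beta).real],
    ])

def transform_moments(q0, sigma0, alphas, betas):
    """Transform first and second moments as in Equations 5."""
    n = alphas.shape[0]
    M = np.zeros((2 * n, 2 * n))
    for m in range(n):
        for k in range(n):
            M[2*m:2*m+2, 2*k:2*k+2] = symplectic_block(alphas[m, k], betas[m, k])
    return M @ q0, M @ sigma0 @ M.T

def reduce_to_mode(q, sigma, k):
    """Partial trace: keep only the rows/columns belonging to mode k."""
    idx = [2 * k, 2 * k + 1]
    return q[idx], sigma[np.ix_(idx, idx)]

# Two modes, both initially in the vacuum (sigma = identity in this convention).
q0 = np.zeros(4)
sigma0 = np.eye(4)

# Placeholder coefficients with slight mode-mixing and particle creation;
# they do not exactly satisfy the Bogoliubov identities, being for illustration only.
alphas = np.array([[0.99, 0.10], [-0.10, 0.99]], dtype=complex)
betas = np.array([[0.01, 0.05], [0.05, 0.01]], dtype=complex)

q, sigma = transform_moments(q0, sigma0, alphas, betas)
q_k, sigma_k = reduce_to_mode(q, sigma, k=0)
purity = 1.0 / np.sqrt(np.linalg.det(sigma_k))  # P = 1/sqrt(det sigma^(k))
print("reduced covariance matrix:\n", sigma_k)
print("purity:", purity)
```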
Relativistic quantum metrology
The field of quantum metrology developed in parallel to quantum information [51,52], and is concerned with the application of quantum features, such as squeezing or entanglement, to improve the precision with which some quantity is measured. Say we seek to estimate a parameter $\lambda$ by making $M$ measurements. The variance $\Delta\lambda$ of estimators of $\lambda$ satisfies the quantum Cramér-Rao bound [51]

$\Delta\lambda \geq \frac{1}{\sqrt{M H_\lambda}}, \quad (8)$

where $H_\lambda$ is the quantum Fisher information (QFI). We can therefore use the QFI to quantify the precision with which a parameter can be measured: a greater QFI implies a greater precision. We note, however, that the QFI is obtained by an unconstrained optimization over all generalized measurements [51], and as such gives the theoretical maximum precision, without any consideration of the feasibility of the measurement process required to achieve it. In recent years there has been an interest in using squeezed light to improve the sensitivity of gravitational measurements such as in the LIGO gravitational wave detector [53], and in atom interferometric measurements of gravitational field gradients [54]. Typically, proposals consider non-relativistic quantum theory and Newtonian physics, while others include some corrections due to GR [55]. In [42,43], quantum metrology was considered using QFTCS, giving a more fully relativistic application of quantum metrology. Applying these ideas, we consider $\lambda$ to be encoded into the Bogoliubov coefficients, and thus into the matrices $\mathcal{M}_{mn}$ given by Equation 6. From the corresponding transformation of the first and second moments (Equations 5), and the expression of the Gaussian state parameters in terms of these moments (Equations 7 of [56]), one can see how the parameters encode $\lambda$. We apply quantum metrology to the estimation of the phase of a single-mode Gaussian state, i.e. $\lambda = \theta$. The QFI for the phase, written in terms of the other Gaussian state parameters, is given by [56]

$H_\theta = 4\alpha^2 P \left[ \cosh(2r) + \sinh(2r)\cos\varphi \right] + \frac{4\sinh^2(2r)}{1 + P^2}. \quad (9)$
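A short numerical illustration of Equations 8 and 9 (assuming the reconstructions given above) is shown below; it compares the phase QFI, and the corresponding Cramér-Rao bounds, for a coherent state and a squeezed vacuum of equal mean particle number.

```python
import numpy as np

def qfi_phase(alpha, r, phi, P=1.0):
    """Phase QFI of a single-mode Gaussian state (Equation 9 as reconstructed)."""
    return (4 * alpha**2 * P * (np.cosh(2 * r) + np.sinh(2 * r) * np.cos(phi))
            + 4 * np.sinh(2 * r)**2 / (1 + P**2))

def cramer_rao_bound(H, M=1):
    """Lower bound on the phase uncertainty (Equation 8)."""
    return 1.0 / np.sqrt(M * H)

N = 1.0  # mean particle number, an illustrative value
# Coherent state: <N> = alpha^2, no squeezing.
H_coh = qfi_phase(alpha=np.sqrt(N), r=0.0, phi=0.0)
# Squeezed vacuum: <N> = sinh^2(r), no displacement.
H_sq = qfi_phase(alpha=0.0, r=np.arcsinh(np.sqrt(N)), phi=0.0)

print("coherent:        QFI =", H_coh, " bound =", cramer_rao_bound(H_coh))
print("squeezed vacuum: QFI =", H_sq, " bound =", cramer_rao_bound(H_sq))
```

For a pure state this reproduces the familiar results $H = 4\langle N \rangle$ for a coherent state and $H = 8\langle N \rangle(\langle N \rangle + 1)$ for a squeezed vacuum, consistent with the statement below that the squeezed vacuum is the best Gaussian state for phase estimation at fixed $\langle N \rangle$.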
C. The effect of non-inertial motion
To describe an accelerating clock, one can make use of so-called Rindler coordinates. These coordinates are natural for describing accelerated observers in a number of ways. For example, an observer at any fixed spatial Rindler coordinate undergoes a constant proper acceleration and has a proper time linearly proportional to the Rindler time coordinate. Furthermore, an extended object which is stationary in Rindler coordinates satisfies a number of desirable properties, including Born rigidity [57] and a constant "radar length" (the length as measured by the round-trip time of a light pulse) [6]. By judiciously connecting together Rindler coordinates corresponding to different proper accelerations, the Bogoliubov transformation corresponding to a continuously varying (finite-duration) proper acceleration can be calculated [46]. For the results described in this section, however, it suffices to join segments of constant proper acceleration with segments of inertial motion, as detailed in [37].
In [45], the effect of non-inertial motion on the clock time was investigated in the famous twin-paradox scenario. In this scenario, one clock remains motionless while another undergoes a round trip, and the stationary clock registers more time passing than the round-trip clock. The round-trip trajectory was composed of periods of constant proper acceleration $a$ interspersed with periods of inertial motion (see Figure 1, and reference [45] for more details). The clocks were initialized in the same coherent state. First considering the purely classical deviation (i.e. in the absence of mode-mixing and particle creation) between a pointlike and a spatially extended clock, one finds a difference only during the periods of acceleration; this difference is quantified by Equation 10 of [45]. Recalling that less time passes for the accelerated pointlike "twin" than the stationary one, we see from this result that the classical effect of the clock's nonzero spatial extent is to increase this disparity. If we now include mode-mixing and particle-creation effects due to the motion, as determined by the Bogoliubov transformation, we find a non-trivial relation between the time as measured by the relativistic quantum clock model and a pointlike clock. This is illustrated in Figure 2 using experimentally feasible parameters for the superconducting quantum interference device (SQUID) setup discussed in Section III E. The left inset of Figure 2 shows the difference between the quantum clock and a pointlike clock, both with and without mode-mixing and particle-creation effects, as a function of the clock size $L$. The right inset gives the percentage of the effect due to particle creation alone, again as a function of the clock size. Particle creation being a purely quantum effect, this gives a new quantum contribution to the relativistic phenomenon of time dilation. The complicated oscillatory behavior of this contribution is due to the non-trivial $L$-dependence of numerous complex terms which are added together to give the relevant Bogoliubov coefficients (see the appendix of [45] for details). The main plot of Figure 2 gives the relative phase shift between the twins' quantum clocks.
[Figure 2 caption: Main plot: phase difference between the twins, using spatially-extended relativistic quantum clocks ($h := aL/c^2$). Left inset: time difference between Rob using a pointlike and a spatially extended clock, with (red) and without (blue) mode-mixing and particle-creation effects, as a percentage of the total time dilation between the twins. Right inset: percentage of the total time dilation between twins due exclusively to particle creation. Figure taken from [45].]
In [56], the effect of non-inertial motion on the precision of the clock was investigated. This depends on the state in which the clock is initialized. The QFI for the phase of a Gaussian state was given in Equation 9. From this we see that, for $\varphi \notin (\pi/2, 2\pi/3)$ and a given purity, the precision of phase estimation increases with the real displacement parameter $\alpha$ and the magnitude $r$ of the squeezing. For a given average particle number $\langle N \rangle$, the squeezed vacuum state is the best Gaussian state for phase estimation [58]. In Figure 3, the effect of non-inertial motion on the QFI for coherent and squeezed vacuum states is depicted. In particular, one can see the separability of the mode-mixing and particle-creation effects. Mode-mixing acts to decrease the QFI, and therefore the precision of the clock, more so for the squeezed vacuum than for the coherent state, though in the regime considered there is no point at which the coherent state gives a better clock than the squeezed vacuum. Particle creation, on the other hand, can either ameliorate or exacerbate this effect, depending on the initial phase $\theta_0$ of the clock. For large $\langle N \rangle$, the degradation due to mode-mixing dominates, but as $\langle N \rangle$ decreases, one arrives at a regime where particle-creation effects dominate. For low enough $\langle N \rangle$ and a careful choice of parameters one can even find cases where the QFI is improved as a result of the generation of the appropriate squeezing, though the set of such cases is relatively small. One can therefore conclude that the typical effect of non-inertial motion is to decrease the precision of the clock.
[Figure 3 caption: The change in the QFI (given as a percentage of its pre-motion value) after non-inertial motion with $h := aL/c^2$, for (a) a coherent initial state, and (b) a squeezed vacuum initial state with $\langle N \rangle = 1$ (blue), $\langle N \rangle = 5$ (red) and $\langle N \rangle = 10$ (green). The phase accrued during each period $t_a$ of acceleration was $\theta_a = \pi$. The solid curves give the effect of mode-mixing alone, while the dotted and dashed curves incorporate the effect of particle creation for an initial phase of $\theta_0 = 0$ and $\theta_0 = \pi/2$ respectively. Figure taken from [56].]
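To make the classical baseline of the twin-paradox comparison concrete, the sketch below accumulates coordinate time and proper time for a pointlike twin along a round trip built from constant-proper-acceleration and inertial segments; the acceleration and durations are purely illustrative, not the experimental parameters of [45].

```python
import math

C = 3.0e8  # speed of light (m/s)

def twin_round_trip(a, t_a, t_i, steps=100_000):
    """Pointlike clock on the segments: accelerate (+a), coast, decelerate
    (-a) for twice as long (turn-around), coast, accelerate (+a) back to
    rest. Segment durations t_a, t_i are proper times. Uses
    d(rapidity)/dtau = a/c and dt = cosh(rapidity) dtau."""
    segments = [(+a, t_a), (0.0, t_i), (-a, 2 * t_a), (0.0, t_i), (+a, t_a)]
    rapidity, t_coord, tau = 0.0, 0.0, 0.0
    for accel, duration in segments:
        dtau = duration / steps
        for _ in range(steps):
            t_coord += math.cosh(rapidity) * dtau  # lab-frame coordinate time
            rapidity += accel * dtau / C
            tau += dtau
    return t_coord, tau

# Illustrative numbers only.
t_coord, tau = twin_round_trip(a=1.0e16, t_a=1.0e-8, t_i=1.0e-8)
print("stationary twin's elapsed time:", t_coord)
print("travelling twin's elapsed time:", tau)
print("fractional time dilation:", 1 - tau / t_coord)
```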
D. Generalizing to curved spacetime
For a pointlike observer, Einstein's equivalence principle allows us to equate free-fall with flat spacetime. However, for a system with some finite extent in a gravitational field, tidal forces will reveal the curvature of the spacetime. Likewise, one can equate a pointlike object at rest in a gravitational field with one undergoing some proper acceleration in flat spacetime, and one finds again that this equivalence breaks down for a system with finite extent. This is illustrated in [59], for example, where it is shown that a reference frame at rest in a uniform gravitational field is not equivalent to a uniformly accelerating one. Given these considerations, when seeking to apply the results discussed above to curved spacetimes, one can only invoke the equivalence principle in a limited sense. Here, we illustrate this in the Schwarzschild spacetime, though a similar argument can be applied to any static spacetime.
In the work discussed in previous sections, Rindler coordinates were used to represent the accelerated observer. One example of such coordinates, $(\eta, \chi)$, can be obtained from inertial coordinates $(T, X)$ by the transformation

$T = \chi \sinh\eta, \qquad X = \chi \cosh\eta. \quad (11)$

Considering a set of observers fixed at each spatial Rindler coordinate $\chi$, we obtain a particular profile of constant proper accelerations experienced by these observers: $a_R = 1/\chi$ (in units where $c = 1$). Now consider the Schwarzschild spacetime corresponding to a mass $M$, in the usual Schwarzschild coordinates $(t, r)$. The metric is given by

$ds^2 = -\left(1 - \frac{2M}{r}\right) dt^2 + \left(1 - \frac{2M}{r}\right)^{-1} dr^2. \quad (12)$

In this case, observers at fixed $r$ experience the constant proper acceleration [60]

$a_S = \frac{M}{r^2 \sqrt{1 - 2M/r}}, \quad (13)$

which is evidently different from the Rindler case. Since the clock has non-negligible extent, we cannot equate these two circumstances in general. Close to the event horizon at $r = r_s := 2M$, however, one can approximate the spacetime experienced by stationary Schwarzschild observers using Rindler coordinates [60], giving an approximate equality between $a_R$ and $a_S$, and in this case one can import the method discussed in Section III C into an investigation in curved spacetime.
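The near-horizon agreement between the two acceleration profiles can be checked numerically. The sketch below, in units where G = c = 1, computes the proper distance chi of a static Schwarzschild observer from the horizon and tests how closely a_S approaches the Rindler value 1/chi; all numbers are illustrative.

```python
import numpy as np

M = 1.0          # black-hole mass, in units where G = c = 1
r_s = 2.0 * M    # Schwarzschild radius

def a_schwarzschild(r):
    """Proper acceleration of a static observer at Schwarzschild radius r."""
    return M / (r**2 * np.sqrt(1.0 - r_s / r))

def proper_distance_from_horizon(r, n=200_000):
    """Proper radial distance from r_s to r: integral of dr / sqrt(1 - r_s/r),
    evaluated with a simple trapezoid rule."""
    rr = np.linspace(r_s * (1.0 + 1e-9), r, n)
    f = 1.0 / np.sqrt(1.0 - r_s / rr)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(rr)))

# Near the horizon, a static Schwarzschild observer should look Rindler-like:
# a_S * chi -> 1, with chi the proper distance from the horizon (a_R = 1/chi).
for r in [2.01 * M, 2.5 * M, 5.0 * M]:
    chi = proper_distance_from_horizon(r)
    print(f"r = {r:.2f} M: a_S * chi = {a_schwarzschild(r) * chi:.4f}")
```

The product approaches unity as r approaches r_s and drifts away from it at larger radii, in line with the limited, near-horizon sense in which the equivalence can be invoked here.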
To examine more general situations, we need to be able to describe the effect of general boundary motion through curved spacetime on the quantum state of the field, a problem whose solution was unknown until recently. We gave such a solution in [61], providing a method for describing the effect of a finite period of cavity motion through a static curved spacetime for a broad class of trajectories. This provides us with the means to explore the effect of gravity on the clock, namely how deviations from the proper-time prescription of relativity depend on the spacetime curvature, and how the precision of the clock is affected.
There remain, however, certain challenges. In the flat spacetime case, there was an unambiguous notion of length which could be adopted, determined by demanding that an observer accelerating with the clock measure a constant length. This results in a number of desirable properties, such as Born rigidity (a lack of stresses on the clock support system), constant radar distance (the distance as measured by timing classical light pulses), and constant proper length. In curved spacetime, however, such notions do not necessarily coincide, and there is no unambiguous generalization of Rindler coordinates. Fermi-Walker coordinates are a candidate for such a generalization, but it is unclear if this theoretical construction is in keeping with the operationalism which we have until now adopted (for example by defining time as that which is measured by a clock). We are currently investigating different notions of length in curved spacetime, and how the choice of which notion to adopt affects the measurement of time.
The discussion above considered only static spacetimes. Now including the possibility of non-static ones, we can ask how the spacetime dynamics themselves affect the clock. This question brings with it an added complication: in order to associate a set of solutions to the field equations with particle modes, we require that the spacetime admits a timelike Killing vector field, which is by no means guaranteed for a nonstationary spacetime. Without such a vector field, there is an ambiguity in the concept of particles [50]. Nonetheless, there are some cases in which these issues can be overcome, such as in the usual calculation of particle creation due to an expanding universe [62,63], leaving us free to apply the quantum clock model.
E. Physical implementation
As noted in Section III B 1, the scalar field used in the clock model described above can represent light in an optical cavity (neglecting polarization), or the phonons of a BEC under certain conditions [49]. We only consider the former implementation here. Subjecting the mirrors of an optical cavity to the necessary non-inertial motion is technically infeasible [66]. To circumvent this requirement, a novel solution was proposed in [67]; by placing a SQUID at one or both ends of a waveguide, one can create effective mirrors whose position is determined by the inductance of the SQUID, which is in turn controlled by an external magnetic field. Modulating the external magnetic field therefore allows the experimenter to control the position of this effective mirror. This setup is illustrated in Figure 4. By oscillating one mirror at a particle-creation resonance, it was used to observe the DCE for the first time [30]. In [45], the authors analyzed the feasibility of implementing the trajectory detailed in Section III C using a SQUID setup, concluding that the experiment would be challenging but possible.
IV. CONCLUSION
The results discussed above demonstrate both a deviation from the proper-time prescription of relativity when one considers a quantum clock with some finite extent, and a relativistic change in the quantum uncertainty associated with its measurement of time. Though these results are so far limited to flat spacetime, the main challenge to applying the model in curved spacetime, i.e. calculating the effect of motion through curved spacetime on the localized field, has now been overcome.
In Section I, we noted four problems arising in the overlap of quantum mechanics and relativity, which we wish to investigate. For clarity we repeat them here, before discussing each of them in turn:
1. finding the constraints imposed by quantum theory on clocks in GR;
2. reconciling the proper-time prescription of GR with the impossibility of pointlike quantum trajectories;
3. investigating the validity of the clock hypothesis;
4. examining the applicability of the equivalence principle to a non-pointlike quantum clock.
To address the first problem, the quantum uncertainty of the clock measurement was quantified using the tools of quantum metrology, and in particular the Cramér-Rao bound. One finds that the change in precision due to relativistic motion depends upon the quantum state in which the clock was initialized, as one might expect. While some states were more robust than others, except for very particular circumstances, the motion had the effect of decreasing the QFI for all initial states, largely due to the mode-mixing. In the example considered, the more nonclassical the state, the greater its fragility with respect to the motion. A key goal of our ongoing work is to determine the effect of spacetime curvature on this analysis.
With regard to the second issue, we have attempted to move away from the proper-time prescription of GR in favor of an operationalist view, instead defining time as the result of a measurement performed on a quantum clock. This is in keeping with the Machian view that a physical theory should be based entirely on directly observable properties [68]. We have succeeded to some extent, in that the particles of the field do not follow well-defined trajectories, and the clock time is determined by the quantum evolution of the system and not simply the length along a curve. However, we are still bound by the proper-time view, as we must choose a classical observer whose proper time parametrizes the evolution of the quantum field. Furthermore, the phase of the field, whose measurement we take as time, has a definite, noncontextual value in this model, and so is not treated as a fully quantum observable. Nonetheless, this value gives a different clock readout from the corresponding proper time, and this difference is a highly non-linear function of clock size (see the insets in Figure 2), demonstrating the non-trivial effect of the clock's non-pointlike and quantum nature.
Concerning the clock hypothesis, we can clearly state that, with the clock model employed here, one finds effects beyond the instantaneous-velocity-induced time dilation (a finding which is corroborated in [14]). These effects modify both the time measured by the clock, and the precision of this measurement. This is a strong indication that, in a quantum theory of spacetime, where the latter is a measurable quantity, the clock hypothesis is not satisfied.
For the fourth problem, we discussed in Section III D the applicability of the equivalence principle in the current model. To fully investigate this, we first need to study trajectories in curved spacetime. One would expect the clock to exhibit a kind of "tidal" effect from the difference in gravitational field across the extent of the clock system, and for this to therefore depend on the clock size and the underlying curvature. However, it seems unlikely that this will allow us to address the issue of incorporating the physical insight of the equivalence principle into a non-pointlike quantum theory.
We now note some limitations of the model and our analysis. Firstly, the QFI is obtained by optimizing over all physically allowable measurements, with no regard to their accessibility to an experimentalist, nor to the available energy. A consideration of the latter, for example its effect on the spacetime which the clock measures, could result in a greater clock uncertainty.
Another potential limitation is the possibility that the results discussed here are not fundamental, but in fact particular to the specific clock model. However, the model is rather general for QFTCS: we seek a localized field, which therefore demands some kind of potential, and we justify the use of boundaries (i.e. infinite potential barriers) by noting that the shape of this potential should not play a fundamental role. One can nonetheless make this more general, by instead considering some trapping potential, or by making the boundaries only reflective to certain frequency ranges. This results in a motion-induced coupling between trapped 'local' modes and global ones, the latter spanning the entire spacetime, and such a coupling would therefore likely reduce the precision of the clock. If this is true, the choice of boundaries used here can be seen as optimizing the clock precision over all possible localizing potentials.
As a final remark, we note that this clock model is, in effect, a quantum version of the common light-clock thought experiment often used to illustrate relativistic time dilation (including by Einstein himself [69]). | 2016-09-29T16:58:27.000Z | 2016-09-29T00:00:00.000 | {
"year": 2016,
"sha1": "06a1e0aa02e3f791ccf325821d1da7fc84b2eb95",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1609.09426",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "315189db4f7f308974c552d000c0484e5a73698d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
258669578 | pes2o/s2orc | v3-fos-license | A Novel Approach for Prediction of Lung Disease Using Chest X-ray Images Based on DenseNet and MobileNet
The COVID-19 coronavirus has caused widespread disruption across the world in terms of health, the economy, and society. X-ray images of the chest can be helpful in making an accurate diagnosis because the coronavirus typically first manifests its symptoms in patients' lungs. In this study, a classification method based on deep learning is proposed as a means of identifying lung disease from chest X-ray images. In the proposed study, the detection of COVID-19 coronavirus disease from chest X-ray images is performed with MobileNet and DenseNet models, which are deep learning methods. Several different use cases can be built with the help of the MobileNet model, and a case modelling approach is utilized to achieve 96% accuracy and an Area Under Curve (AUC) value of 94%. According to the results, the proposed method may be able to more accurately identify the signs of disease in a dataset of chest X-ray images. This research also compares various performance parameters such as precision, recall and F1-score.
Introduction
Many people all over the world have been affected by one or more of the many types of lung disease. Lung ailments leave the lungs more vulnerable to other physical issues and to the damaging effects of air pollution; as a direct consequence, lung function is reduced. Because lung disease can be contagious, it is essential to diagnose the patient's condition correctly in order to give them the most effective therapy possible. Chest X-rays are the primary tool radiologists utilize when trying to identify and diagnose lung conditions. Radiologists are able to diagnose and recognize a wide variety of disorders using chest X-rays, including bronchitis, infiltrations, atelectasis, pericarditis, fractures, and many others [1]. It is estimated that more than 2 million procedures involving chest radiography are carried out annually, making this the most common type of radiological examination worldwide. The signs and symptoms of COVID-19 are comparable to those of a number of other viral diseases, such as pneumonia and lung opacity, so these specific conditions must be distinguished at the same time. Accordingly, the classes considered here are COVID, pneumonia, lung opacity, and normal [2]. Fig. 1 shows examples of the chest X-ray image classes in the dataset used for this research, such as normal, COVID, and viral pneumonia.
Images of diseased lungs are very complex, so discovering and categorizing the characteristics of diseases in X-rays is a highly difficult task. Because of this complexity, it is problematic to analyze the conditions and disorders visible in X-ray pictures, and manual analysis has struggled to attain a satisfactory level of performance. Deep learning procedures have therefore been used for the identification of these types of lung disease. There is a large body of work on lung disease prediction using different deep learning techniques; neural networks have been applied to the identification of lung disorders such as tuberculosis (TB), chronic obstructive pulmonary disease (COPD), and pneumonia, as well as to the diagnosis of cancer [3].
Deep learning approaches have demonstrated high efficacy and good performance in image categorization compared with a variety of other machine learning approaches, and deep learning is now being utilized to automate the diagnosis of a number of different diseases [4,5]. In addition, the term "machine learning" refers to models that are capable of learning and making judgments based on huge volumes of data samples. By performing calculations and making predictions based on the information it receives, deep learning is able to complete tasks that would normally require human intellect; examples of such tasks include voice recognition, translation, and visual perception [6,7].
[Fig. 1 caption: Classes of chest X-ray images]
Researchers are pooling their resources in an effort to invent tools that will make the work of radiologists and physicians easier. A number of different AI techniques have been tried so far in the quest to find the most effective network for the field of radiology and the processing of medical images [8]. In the field of medical image processing, Convolutional Neural Networks (CNNs) have demonstrated very promising results in the areas of image classification, localization, and segmentation [9].
This study presents a robust method for classifying and predicting lung diseases by applying MobileNet and DenseNet to chest X-rays, labelling each image as belonging to either a healthy subject or one with a lung infection. The first section introduced lung diseases and the deep learning methods used to predict such diseases. Section two describes the previous related work in the literature. The third section deals with the methodologies used in the study. Section four explains the proposed method, the results and discussion are presented in section five, and the sixth and last section concludes the study.
Literature Review
Babukarthik et al. [10] gave attention to COVID-19 disease, as cases were rising exponentially day by day, and from this arose the need for a faster and more economical diagnostic solution. Their aim was to examine chest X-ray (CXR) image samples of the lungs and determine whether a particular person is suffering from COVID-19 or not. For this they developed a Genetic Deep Learning CNN, calculating its fitness, and compared it with other models such as VGG16, SqueezeNet, ResNet50, DenseNet-121, and ResNet18. After the comparison with the other methods, they found that their model performed well according to their analysis.
Minaee, Shervin, et al. [11] discussed COVID-19 and its effects in different parts of the world. Detecting COVID-19 is one of the crucial steps in its treatment: the faster the detection, the faster the treatment and the fewer the cases of COVID-19. They collected a dataset comprising more than 5000 chest X-ray images and then tested different deep learning models, namely SqueezeNet, DenseNet-121, ResNet50 and ResNet18. To obtain the results they divided the dataset into test and training sets comprising 2000 and approximately 3000 chest X-ray images, respectively. The specificity came out to be around 90% and the sensitivity around 98%, based on a detailed analysis of the experiment. They mainly followed a transfer learning approach, which is generally used when limited samples are available and serves to extract different features. As the dataset was limited, they tuned the last layer of the convolutional neural network to extract all the relevant features and obtain more accurate results; a sketch of this last-layer fine-tuning recipe is given after this paragraph. They evaluated the models with four different parameters: specificity, sensitivity, AUC and ROC. As the specificity and sensitivity of all the models came out to be around 90%, this comparison was quite successful, and in the future they will work on larger datasets to obtain more accurate results.
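A minimal Keras sketch of this transfer-learning recipe, freezing a pretrained backbone and training only a new final classification layer, is given below; the backbone choice, input size and hyperparameters are illustrative assumptions, not the exact configuration of [11].

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Pretrained backbone used as a frozen feature extractor; only the new
# softmax head below is trained (note: input preprocessing is omitted here).
base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze all convolutional layers

model = models.Sequential([
    base,
    layers.Dense(2, activation="softmax"),  # e.g. COVID vs non-COVID
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```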
Cohen, Joseph Paul, et al. [12] decided to work on COVID-19 detection at early stages, because at that time the hotspots in every country were increasing at an exponential rate and no vaccine was yet available, so a major need arose to detect the disease and work on its treatment. This was mainly done to help doctors in different countries detect COVID-19 in its early stages so that the number of hotspots could be reduced. To build their research dataset they collected chest X-ray images from more than 26 countries. As the dataset was large, various biases and problems arose, so they applied the technique of leaving one continent or one country out: for the test and training datasets they held out different groups of countries or continents to obtain accurate results. They mainly predicted the severity of COVID-19, using logistic regression and linear regression. In doing so, they produced the first chest X-ray image dataset with more than 20 outcomes.
Panwar, Harsh, et al. [13] worked on early detection of COVID-19 using various deep learning techniques, as these were the most successful among the machine learning techniques available for prediction. They mainly focused on one main approach, colour visualization. Their COVID-19 recognition considered three datasets: chest X-ray images of COVID-19 disease, SARS-CoV-2 CT scans, and chest X-ray images of pneumonia. They also examined the patterns shared by pneumonia and COVID-19, as the symptoms of the two diseases are quite similar. For colour visualization they focused on a Grad-CAM-based technique.
Heidari, Morteza, et al. [14] focused on COVID-19 because it is an infectious disease, so its detection in the early stages is very crucial and helps to control its spread in the most efficient and effective way possible. Here, too, a chest X-ray dataset was used for prediction, and they tested a novel computer-aided diagnosis (CAD) scheme. The main classes they considered were pneumonia (non-COVID), pneumonia (COVID) and normal (non-COVID and no pneumonia). After applying this scheme to the model, the accuracy came out to be 94.5%, with around 98% specificity and sensitivity.
Alazab, Moutaz, et al. [15] also focused on COVID-19 because it is an infectious disease that was spreading rapidly at the time. To obtain more accurate results they acquired 1000 real chest X-ray images. They mainly focused on deep learning techniques, namely the autoregressive integrated moving average model, the Prophet algorithm, and the long short-term memory neural network. With these they predicted the numbers of deaths, recoveries and confirmed cases for the following seven days. The predictions were made for regions such as Australia and Jordan, and they noted that the model can also be used where there are higher numbers of infections. The predicted death, recovered and confirmed case counts were presented in graph form for India, Australia and Jordan.
Tahia Tazin et al. [16] present research that was conducted using a convolutional neural network (CNN) to identify brain cancers from X-ray pictures. Given the extensive amount of research that has been done in this area, the model presented emphasizes improving accuracy while utilizing a transfer learning technique. The performance was evaluated based on how accurately the classifications were made: an accuracy of 92% was achieved by MobileNetV2, 91% by InceptionV3, and 88% by VGG19. Compared with the other networks, MobileNetV2 provided the highest level of accuracy. These accuracies assist in the early detection of tumours, allowing treatment to begin prior to the onset of any harmful physical effects, such as paralysis or other impairments.
Abdelbaki Souid et al. [17] presented a method for the categorization and identification of lung diseases from frontal thoracic X-ray images using an altered version of the MobileNet V2 model. They examined transfer learning in conjunction with metadata leveraging and employed the NIH database. AUC measurements were used as the primary comparison tool, and the analysis focused on the differences between classifiers. The authors report an AUC of 0.81 on average and 90% accuracy, and conclude that resampling the dataset results in a significant improvement in the performance of the model.
As was noted earlier, a large number of people, especially youngsters, are affected by lung diseases such as COVID-19 or pneumonia, and these diseases are compounded by the lack of availability of appropriate medical facilities. It is essential to make a prompt diagnosis of such lung diseases in order to achieve a full recovery. The most common method of diagnosis is the examination of X-ray scans; however, the accuracy of this method is contingent on the interpretative skill of the radiologist and is frequently contentious among radiologists.
To the best of our knowledge, the majority of the previously mentioned methods in the related work focused on developing a genetic algorithm, a CNN model, or a deep learning algorithm for the classification of lung disease. The purpose of the presented work, in contrast, was to construct a trainable model for low-power devices, to solve the problem of robust prediction of lung disease, and to solve the problem of vanishing gradients; this study was implemented to those ends.
The following are the major contributions of this research work.
• A dynamic feature propagation scheme, implemented with MobileNet and DenseNet, was developed and is proposed for enhancing the performance of native CNN models for lung disease classification.
• The performance of the proposed method was determined by five evaluation metrics: accuracy, precision, recall, F1-score, and AUC.
• The results of the proposed model were compared to a native CNN technique and a MobileNet method. The results of the proposed model were found to be superior to those of the native methods, which indicates that the method is feasible for use in the practical sector.
Methodology
In this section, the methodology applied to accomplish the aims of the study is briefly explained. Using the Keras framework, we propose the categorization and prediction of lung illnesses in chest X-ray images. In preparation for the proposed work, we located and categorized 40,000 X-ray images taken from an online platform. In this work, a CNN model, the MobileNet model, and additional DenseNet layers are utilized for predicting and classifying chest and thoracic disorders based on chest X-ray images. MobileNet is an architecture that is well suited to devices with minimal power consumption. All chest X-ray images were downsized to a resolution of 224 × 224 pixels before being passed into a pretrained model for feature extraction. Every image was normalized to meet the standards of the pretrained model. The following methodologies are applied in this investigation.
Depthwise Separable Convolution
The MobileNet block consists of two stages: a depthwise convolution and a pointwise convolution, performed after downsampling of each feature map. The depthwise convolution is responsible for the filtering stage, and the pointwise convolution performs the combining stage [18]. The depthwise convolution applies a single filter to each input channel. Pointwise convolution is then applied to the output of the depthwise layer to generate a linear combination, and MobileNet uses a 1 × 1 filter for this convolution. The MobileNet architecture replaces a single 3 × 3 convolution with this factorized pair, each followed by batch normalization and ReLU; this is the primary distinction between MobileNet and a conventional CNN. MobileNet thus performs the convolution in two steps: first a depthwise 3 × 3 convolution kernel applied to each input channel individually, and then a pointwise 1 × 1 convolution [19].
The conventional method of convolution consists of two stages: the first stage is filtering, and the second stage is merging inputs into a new set of outputs. Because of this factorization, the model's computation and size are significantly reduced. The depthwise separable convolution separates the filtering and combining phases, and it has further properties that lower the size and complexity of the model. These are the primary factors that contribute to a considerable reduction in the amount of computation required.
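For illustration, the following is a minimal Keras sketch of such a depthwise separable block: a 3 × 3 depthwise convolution followed by a 1 × 1 pointwise convolution, each followed by batch normalization and ReLU. The input shape and filter count are assumptions for the example, not values taken from the experiments in this work.

```python
# A minimal sketch of a MobileNet-style depthwise separable block.
# Input shape and filter count are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def depthwise_separable_block(x, pointwise_filters, stride=1):
    # Filtering stage: one 3x3 filter applied per input channel.
    x = layers.DepthwiseConv2D((3, 3), strides=stride, padding="same",
                               use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    # Combining stage: 1x1 pointwise convolution linearly mixes the channels.
    x = layers.Conv2D(pointwise_filters, (1, 1), padding="same",
                      use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

inputs = tf.keras.Input(shape=(224, 224, 3))
outputs = depthwise_separable_block(inputs, pointwise_filters=32)
model = tf.keras.Model(inputs, outputs)
model.summary()
```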
Dense-MobileNet utilises depthwise separable convolution throughout. Each block consists of two convolutional layers in addition to a depthwise layer. The input to the first layer is the collection of outputs produced by the previous depthwise separable convolution layers. As can be seen in Fig. 2, the dense block structure represented here possesses only a single dense link [20].
Fig. 2 Dense-MobileNet model [20]

The convolution layer takes input data of size D_fm × D_fm × M (1) and generates a feature map of size D_fm × D_fm × N, where D_fm denotes the spatial height and width of each square feature map, M is the number of input channels (the depth), and N is the number of output channels. A standard convolution is specified by a kernel of size D_ke × D_ke × M × N, where D_ke is the kernel's spatial dimension. The computational cost of the standard convolution is therefore D_ke × D_ke × M × N × D_fm × D_fm. MobileNet can employ different width factors and input layer sizes to reduce inference cost on the device; depthwise convolution breaks the interaction between the output kernel size D_ke × D_ke and the feature map size D_fm × D_fm. In our proposed model, filtering and grouping are divided by using depthwise separable convolution to reduce the computational cost. MobileNet is a CNN class that Google open-sourced, so we can use it to train small, rapid classifiers [21,22].
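As a rough check of these savings, the snippet below compares the standard cost above with the cost of a depthwise separable convolution, D_ke × D_ke × M × D_fm × D_fm + M × N × D_fm × D_fm, as given in the MobileNet literature; the layer dimensions used are illustrative assumptions.

```python
# Compare the per-layer multiply-accumulate cost of a standard convolution
# with that of a depthwise separable convolution. Dimensions are illustrative.
D_ke, D_fm, M, N = 3, 112, 32, 64

standard_cost = D_ke**2 * M * N * D_fm**2
separable_cost = D_ke**2 * M * D_fm**2 + M * N * D_fm**2

print(f"standard:  {standard_cost:,}")
print(f"separable: {separable_cost:,}")
# The ratio approaches 1/N + 1/D_ke**2, roughly an 8x reduction here.
print(f"reduction: {standard_cost / separable_cost:.1f}x")
```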
DenseNet
A DenseNet is a specific kind of convolutional neural network that makes use of dense connections between layers. These dense connections are made possible by dense blocks, which connect all of the network's layers directly with one another. DenseNet has different variants, such as DenseNet-121, DenseNet-160, and DenseNet-201, where the numbers represent the number of layers present in the neural network [23]. The number 121 in DenseNet-121 can be counted as 5 + (6 + 12 + 24 + 16) × 2: the four dense blocks contain 6, 12, 24, and 16 composite layers, each comprising two convolutions, plus one initial convolution-and-pooling stage, three transition layers, and one classification layer, which together account for the remaining 5 [24].
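As a minimal sketch, a DenseNet variant can be loaded as a frozen feature extractor through the Keras applications module; the input size and ImageNet weights here are illustrative assumptions.

```python
# Load DenseNet-121 as a frozen feature extractor (transfer learning).
from tensorflow.keras.applications import DenseNet121

backbone = DenseNet121(weights="imagenet", include_top=False,
                       input_shape=(224, 224, 3), pooling="avg")
backbone.trainable = False  # freeze pretrained weights
# Note: this counts Keras layer objects, not the 121 weighted layers.
print(len(backbone.layers))
```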
Preparation for training the model
MobileNet is a deep neural network that was used in this work within a supervised machine learning procedure. For prediction, we randomly split the complete dataset into two sets: a training dataset (70%) and a testing dataset (30%). After this, the images were embedded using an image embedding widget; here we used the Inception V3 embedder. After embedding, the images were passed to the different models so that they could learn to classify all chest X-ray images as COVID, Pneumonia, Lung Opacity, or Normal.
The datasets are loaded and randomly transformed into images by utilising the TensorFlow and Keras packages during the preparation processes. For training, the maximum number of images per batch is 64. First, we provided the MobileNet base with inputs of size 224 × 224. Additionally, when working with spatial data, a 2D pooling layer with a 2 × 2 filter is applied to help reduce the feature dimension. In the dense layer, all neurons are fully connected to the subsequent output. Dropout layers deactivate neurons that are chosen at random. Next, ReLU and sigmoid are applied as activation functions, and optimization methods are used to classify the images.
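A minimal sketch of this preparation step is shown below; it assumes the images are organized in class subfolders under a hypothetical xray_data/ directory, while the 70/30 split, 224 × 224 input size, and batch size of 64 follow the text.

```python
# Build training (70%) and testing (30%) generators for the X-ray images.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.30)

train_gen = datagen.flow_from_directory(
    "xray_data/", target_size=(224, 224), batch_size=64,
    class_mode="sparse", subset="training")    # 70% of the data
test_gen = datagen.flow_from_directory(
    "xray_data/", target_size=(224, 224), batch_size=64,
    class_mode="sparse", subset="validation")  # held-out 30%
```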
Proposed Method
The proposed approach solves the problem of robust prediction of lung disease and the problem of vanishing gradients while requiring fewer parameters to train the model. The dynamic feature propagation implemented with MobileNet and DenseNet is responsible for ensuring that information is transferred without interruption. To conduct the experiment, version 201 of DenseNet, i.e., DenseNet201, is utilized. Because the model is loaded with chest X-ray images in the form of arrays, a function from the image library is used to convert each image for use as an array. In image analysis, the pre-processing phase is an essential component of the overall workflow: it can improve the original image while simultaneously reducing noise and unnecessary elements. In this research, we apply fundamental processing techniques to our image dataset. A CNN was incorporated into the image classification process. The CNN model is capable of providing human-comparable accuracy in the classification of different images. One way to think of a convolutional network is as a series of connected convolution layers, each with pooling layers and batch normalization procedures. The numerous benefits that CNNs offer have led to their widespread use in image processing methods.
The proposed approach uses an 8-layer convolutional neural network. Two Conv2D layers each consist of a convolution with 32 filters of size 3 × 3 and the ReLU activation function. Two MaxPooling layers of size 2 × 2 follow. A Dropout layer helps to prevent overfitting by setting inputs to zero at random with a predetermined rate at every stage of the training phase; in this experiment, the rate is 0.8. A flattening layer (Flatten) is used with no additional parameters. A Dense layer is applied with the ReLU activation function and 128 units. The final output Dense layer has 3 units and uses the softmax activation function. Fig. 3 shows the overall flow of the proposed approach.
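A minimal Keras sketch of this 8-layer network is given below; the exact interleaving of the convolution and pooling layers and the input size are assumptions based on the description above.

```python
# The 8-layer CNN described in the text: 2 conv, 2 pooling, dropout,
# flatten, and 2 dense layers. Layer interleaving is an assumption.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),  # 32 filters, 3x3, ReLU
    layers.MaxPooling2D((2, 2)),                   # 2x2 max pooling
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.8),                           # dropout rate from the text
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(3, activation="softmax"),         # Normal / COVID-19 / Viral Pneumonia
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```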
After the categorization of all chest X-ray images, we apply DenseNet with four dense blocks of equal depth to the 224 × 224 input images. On the input images, a convolution with a total of 16 distinct output channels is carried out before moving on to the first dense block. To maintain a constant size for the feature map, convolutional layers with a kernel size of 3 × 3 are used with zeros padded on every side. We employ 1 × 1 convolution followed by 2 × 2 average pooling, using ReLU as the activation function. After the final dense block, average pooling is carried out and a softmax classifier is attached. The overall process is run with the DenseNet configuration on top of the MobileNet hidden-layer configuration, with 'adam' used as the optimizer, 'sparse_categorical_crossentropy' as the loss function, and 20 epochs at a batch size of 64. The experiments with this proposed approach exhibited encouraging outcomes. We evaluated the algorithms using four parameters: accuracy, precision, recall, and F1-score. The parameters were determined using a confusion matrix.
Results and Analysis
The prediction results were calculated based on several measures: area under the ROC curve (AUC), F1-score, accuracy, precision, and recall. AUC is used to assess the performance of the prediction model. For our workflow, Table 1 shows the performance metrics obtained for CNN, MobileNet, and the proposed approach under these criteria. Precision is the ratio of correct positive classifications, i.e., true positive tuples, to all cases predicted to be positive. Recall is the ratio of true positive tuples to the total number of actually positive cases. Precision and recall are both included in the F1 function, which allows us to evaluate the balance between precision and recall.
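As a brief illustration, these metrics can be derived from a confusion matrix as sketched below; y_true and y_pred are hypothetical label arrays, not results from this study.

```python
# Derive accuracy, per-class precision, recall, and F1 from predictions.
from sklearn.metrics import (accuracy_score, classification_report,
                             confusion_matrix)

y_true = [0, 1, 2, 2, 1, 0, 2]  # 0=Normal, 1=COVID, 2=Viral Pneumonia
y_pred = [0, 1, 2, 1, 1, 0, 2]

print(confusion_matrix(y_true, y_pred))
print("accuracy:", accuracy_score(y_true, y_pred))
print(classification_report(
    y_true, y_pred, target_names=["Normal", "COVID", "Viral Pneumonia"]))
```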
The comparison of the accuracies obtained and their graphical representation for the training and testing datasets are shown in Fig. 4; we obtained a higher testing accuracy. The training and validation accuracy and loss graphs for the dataset are shown in Fig. 5. It is clear from the figure that the training and testing losses decrease with every epoch of the neural network. During the training period, a total of 20 epochs were run to calculate the accuracy and model loss (as shown in Fig. 5), which indicates the epoch whose trained model weights to use at the inferencing stage. The analysis of test performance was generated through the receiver operating characteristic (ROC) [25]. Fig. 6 shows the ROC curve of the native CNN method, which provides 100% of the area for COVID, 86% for normal, and 92% for viral pneumonia on the chest X-ray dataset images.
Similarly, Fig. 7 shows the ROC curve of MobileNet, which offers a 100% region for COVID, 90% for normal, and 92% for viral pneumonia on the chest X-ray dataset. The ROC curve illustrates the trade-off that must be made between sensitivity and specificity. Classifiers that produce curves located closer to the top-left corner demonstrate a higher level of performance.
The ROC curve displayed in Fig. 8 represents the proposed method, indicating a 100% region for COVID, 89% for the normal case, and 93% for viral pneumonia on the X-ray dataset. Among the three categories of X-ray image, the proposed method provides the highest ROC areas for the COVID and viral pneumonia cases.
In addition, Table 2 analyses the similarities and differences between our proposed approach, based on DenseNet and MobileNet, and four other state-of-the-art models that have recently been researched in the scientific literature to identify and categorize lung disease. The datasets and methodologies utilized by these studies are displayed in the table. Due to differing testing techniques, the accuracies of these studies cannot be compared directly, but the statistics presented make it abundantly evident that our proposed model is validated and produces classification results fairly comparable to those of other models.
Conclusions and Future Work
The deep structure of MobileNet and DenseNet takes advantage of their feature-mining capability for the prediction of lung disease and has resulted in a higher testing accuracy rate. The presented method mainly uses a convolutional neural network for chest X-ray classification into three categories of infection, i.e., Normal, COVID-19, and Viral Pneumonia. The proposed approach applies an 8-layer convolutional neural network to the chest X-ray images using a hybrid of the MobileNet and DenseNet approaches. The test accuracy, F1-score, precision, recall, and ROC AUC curve are used in the analysis of the chest X-ray images. In our comparison investigation, the proposed method showed good classification performance, which supports its potential for application in clinical settings for computer-assisted diagnosis of lung disease. The study results show that the precision value is 1 for the COVID and normal cases and 0.79 for viral pneumonia; the recall value is 1 for normal and viral pneumonia and 0.69 for COVID; and the F1-score is 1 for normal, 0.79 for COVID, and 0.85 for viral pneumonia. The proposed approach achieved a high accuracy of 96% and a ROC AUC score of 0.94. Accuracy is the ratio of all correct classifications, including all true positives and all true negatives, to all cases. According to the acquired results, the proposed method yielded highly accurate results in both the diagnosis of the condition and the classification of the chest X-ray images, demonstrating high recognition rates compared with the native CNN and MobileNet methods. It also has a high precision and F1-score, which means it produces fewer false negatives. This is essential for the prevention of infection, because a COVID-positive patient who is incorrectly identified as COVID-negative will considerably spread the virus to other patients. This research work was also intended to construct a trainable model suited to low-power devices. Future work may include exploring additional model structures and novel label-dependency architectures for detecting diseases in noisy images. | 2023-05-14T15:19:01.727Z | 2023-05-12T00:00:00.000 | {
"year": 2023,
"sha1": "60a3254a887d11a562e4180f43ef96e3bfd65903",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "ffda65dcfbf14f0a6974935b57431799527a58ee",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
84183987 | pes2o/s2orc | v3-fos-license | Hydrogen/deuterium exchange-mass spectrometry analysis of high concentration biotherapeutics: application to phase-separated antibody formulations
ABSTRACT High concentration biotherapeutic formulations are often required to deliver large doses of drugs to achieve a desired degree of efficacy and less frequent dose. However, highly concentrated protein-containing solutions may exhibit undesirable therapeutic properties, such as increased viscosity, aggregation, and phase separation that can affect drug efficacy and raise safety issues. The characterization of high concentration protein formulations is a critical yet challenging analytical task for therapeutic development efforts, due to the lack of technologies capable of making accurate measurements under such conditions. To address this issue, we developed a novel dilution-free hydrogen/deuterium exchange (HDX) mass spectrometry (MS) method for the direct conformational analysis of high concentration biotherapeutics. Here, we particularly focused on studying phase separation phenomenon that can occur at high protein concentrations. First, two aliquots of monoclonal antibodies (mAbs) were dialyzed in either hydrogen- or deuterium-containing buffers at low salt and pH. Phases that separated were then discretely sampled and subjected to dilution-free HDX-MS analysis through mixing the non-deuterated and deuterated protein aliquots. Our HDX-MS results analyzed at a global protein level reveal less deuterium incorporation for the protein-enriched phase compared to the protein-depleted phase present in high concentration formulations. A peptide level analysis further confirmed these observed differences, and a detailed statistical analysis provided direct information surrounding the details of the conformational changes observed. Based on our HDX-MS results, we propose possible structures for the self-associated mAbs present at high concentrations. Our new method can potentially provide useful insights into the unusual behavior of therapeutic proteins in high concentration formulations, aiding their development.
Introduction
Over the past few decades, monoclonal antibodies (mAbs) have grown significantly as treatment strategies for cancers and chronic diseases. 1,2 For certain clinical indications, frequent high therapeutic doses (>1 mg/kg) are often required to achieve a desired efficacy. 3 Conventionally, such protein therapeutics are delivered via intravenous (IV) administration in order to take advantage of the improved bioavailability and the greater control offered by the method during clinical development compared with other approaches to drug administration. 4 Despite the wide use of IV administration, large doses of biopharmaceuticals can take a long time to be delivered IV and often require frequent hospital visits, leading to substantial cost increases for patients and health-care providers. Subcutaneous (SC) injections can serve as an alternative drug administration strategy, allowing for patient self-administration and reducing overall costs, but very high therapeutic concentrations (>100 mg/mL) may be required to deliver high doses. 5 Despite the advantages associated with SC administration, development of mAbs formulated at such high concentrations presents many challenges in processing, manufacturing, storage, and delivery, mainly owing to the non-ideal behaviors of highly concentrated proteins, which are quite different from those observed for dilute solutions. Unusual protein behaviors at high concentrations often stem from protein self-association, leading to undesired solution properties, such as increased solution viscosity, opalescent solution appearance, and liquid-liquid phase separation. 3,[6][7][8][9] These unwanted properties can affect drug efficacy and raise safety issues. Liquid-liquid phase separation (LLPS) poses an especially challenging array of problems in the context of biopharmaceutical development efforts. [9][10][11][12][13][14] LLPS is a thermodynamically driven process, during which a homogeneous protein solution forms two distinct phases. The less dense phase typically exhibits a lower protein concentration, whereas the higher-density phase is protein-enriched. LLPS is usually induced by antibody self-association at low temperatures, resulting in protein concentrations for the two phases that are dependent on both temperatures and buffer conditions. LLPS represents a metastable state of the protein solution and can be reversed upon changes in temperature or formulation environment. Many studies have been carried out to investigate the manner in which LLPS phase diagrams are affected by buffer composition, pH, ionic strength, and various excipients. 10,11 Characterization of the two protein phases has been performed using various analytical and biophysical techniques, such as size exclusion chromatography (SEC), ion exchange chromatography (IEX), analytical ultracentrifugation (AUC), dynamic light scattering (DLS), turbidity, and viscosity tests. 13 However, most of the abovementioned techniques can only be performed on diluted solutions, and thus fail to capture any concentration-dependent properties of the two phases. Therefore, analytical techniques that require minimal sample manipulation and dilution are needed to better understand the structural consequences of LLPS or highly concentrated proteins that are of relevance to biopharmaceutical development efforts.
Hydrogen/deuterium exchange-mass spectrometry (HDX-MS) is a versatile tool for the assessment of protein conformations, dynamics, and interactions and is now increasingly applied to mAb analysis. [15][16][17][18][19] However, traditional HDX-MS workflows are typically initiated through the exchange of labile backbone amide hydrogens by diluting protein samples into a D2O-containing buffer. 19 Thus, the use of HDX-MS has been limited for analyzing protein samples at very high concentrations. Recently, HDX-MS workflows designed for the analysis of high concentration protein samples have been described. 20,21 For example, a recently described HDX-MS methodology that relies upon reconstituting lyophilized mAb powders in a deuterated buffer was able to characterize mAb structures at 60 mg/mL. 20 This approach identified protein-protein interfaces associated with a concentration-dependent reversible self-association. While lyophilization combined with HDX-MS can provide protein structure information in a dilution-free mode, the workflow introduces a reconstitution step and is limited to those buffers amenable to the lyophilization process. To overcome these limitations, a dialysis-coupled HDX-MS strategy was recently reported for mAb analysis, in which passive dialysis microcassettes are used for HDX labeling. 21 While this approach successfully sampled high concentration (200 mg/mL) IgG4 formulations for comparison with low concentration (3 mg/mL) samples, the long timescales needed for dialysis likely render many known modes of protein motion inaccessible to the technology.
Here, we describe a novel HDX-MS strategy for assessing protein structures with no manipulation of sample concentration. We begin by preparing two mAb samples at the same concentration: one dialyzed into hydrogen-containing buffer and the other into deuterium-containing buffer under the same conditions. HDX reactions are then initiated by mixing the two protein fractions in a 1 to 1 ratio, followed by MS analysis at either the intact protein or peptide level. Since both the H2O and D2O fractions contain the same concentration of protein, no dilution occurs after mixing. Specifically, we applied this HDX-MS approach toward the comparative characterization of mAb samples in the case of LLPS. A humanized IgG4 monoclonal antibody (referred to as "Mab4") was studied as a model system. Our global HDX-MS data revealed less deuterium uptake for Mab4 in the high-density phase compared to the low-density phase, suggesting the prevalence of less dynamic protein conformations within the former phase. A statistical analysis of our HDX-MS results acquired at the peptide level identified mAb regions exhibiting significant decreases in HDX for mAbs present in the high-density phase. We conclude by proposing a molecular mechanism that describes our phase-separated IgG4 samples.
DSC and DLS measurements reveal concentration-dependent mAb structures
As reported in the literature and observed in our buffer screening experiments (see Supporting Information), LLPS is a reversible process for high concentration mAb samples. When the temperature is higher than the critical temperature (T_C), the two phases merge and reform one homogeneous phase. 10 Similarly, if the highly concentrated solution is diluted to a concentration lower than the concentration of the upper phase, then phase separation will not occur. Despite previous studies, many questions remain surrounding the structures of phase-separated mAbs. Specifically, these questions include whether proteins possess any specific structural characteristics that favor one phase over another, and whether proteins can adapt their conformations upon phase separation. In addition, it is not clear if proteins are able to maintain structural properties acquired during phase separation at high concentration following sample dilution. In an effort to answer some of these questions, we assembled an array of biophysical tools to study Mab4 under LLPS conditions. We performed differential scanning calorimetry (DSC) measurements to characterize the thermal stability of phase-separated Mab4. The Mab4 sample was prepared at a concentration of 50 mg/mL and separated into two clear phases while incubated at 5°C. Samples from the two phases were taken and diluted to 1 mg/mL prior to DSC measurements. As shown in Figure 1, the two samples exhibit highly similar melt temperature profiles, consisting of two major transitions taking place around 68.0°C and 77.8°C. The nearly identical DSC profiles recorded for the two Mab4 samples strongly indicate that the mAbs occupy similar structures regardless of the phases in which they are present during the phase separation process, or that any phase-dependent structural changes are not retained following the sample dilution step necessary for DSC. DLS measurements for diluted Mab4 samples produced results similar to our DSC experiments. A diffusion interaction parameter (k_D) can be empirically determined by measuring the diffusion coefficient (D) for mAbs as a function of protein concentration based on DLS data. Within the concentration range from 0.5 mg/mL to 3 mg/mL, the extracted k_D values are −52.1 mL/g and −50.8 mL/g for Mab4 in the low- and high-density phases, respectively. Negative k_D values represent attractive intermolecular interactions, suggesting a tendency for Mab4 to self-associate and aggregate independent of the protein concentration.
An HDX-MS workflow for phase-separated mAb samples at high concentration

In order to assess protein structures directly at high concentration, we designed an HDX-MS workflow that can be performed in the absence of dilution. The experimental procedure in the case of LLPS is shown in Figure 2, and this approach can be applied similarly to any studies that require direct conformational analysis of protein samples at high concentration. Generally, the sample preparation begins with overnight dialysis of protein into the target formulation (Figure 2(a,b)). Dialysis is performed using a 10 kDa molecular weight cut-off (MWCO) MINI dialysis device that can hold a 2 mL maximum sample volume, placed in a 50 mL conical tube containing the dialysis buffer. The conical tube is gently shaken at ~200 rpm to avoid agitation-induced aggregation. The dialysis buffer is changed twice during dialysis to reach full equilibrium. Two dialysis buffers comprising the same chemical formulation are prepared, one in H2O solvent and the other in D2O. Following the dialysis protocol described above, two fractions of the protein samples are buffer exchanged into the H2O buffer and the D2O buffer separately. Meanwhile, protein in the D2O buffer undergoes HDX. The samples are incubated for at least one week to ensure that the exchange reaches equilibrium (Figure 2(c,d)). Following sample preparation, HDX is initiated by mixing the H2O-buffered sample with that in the D2O buffer at a 1:1 ratio (Figure 2(e,f)). Because the D2O buffer also contains protein, the overall protein concentration of the sample analyzed by MS is maintained. Mixed samples are then subjected to MS analysis at the intact protein or peptide level (Figure 2(g)). One of the advantages of this workflow over previous approaches is the ability to study the effect of LLPS and other solution-phase properties on protein structure at high concentration. For HDX-MS, Mab4 samples prepared at a concentration of 50 mg/mL were dialyzed into the 10 mM citrate buffer with 50 mM NaCl at pH 6. Once dialysis was complete, Mab4 solutions were stored at 5°C to bring about phase separation. Following LLPS, the protein concentration was measured to be 28 mg/mL for the lower-density phase and 150 mg/mL for the higher-density phase. Previous reports have demonstrated that the impact of increased solution viscosity on the rate of HDX is negligible. [22][23][24] Thus, we assumed that a direct comparison of HDX profiles could be performed for Mab4 in the two liquid phases observed in our samples.
Comparative HDX-MS analysis of intact mAbs
Intact Mab4 masses were recorded for samples following HDX to provide an overall picture of antibody structural changes as a function of phase. For each mAb charge state, two resolved peaks were detected at the first reaction time point (100 s), with the lower mass species corresponding to mAbs that were incubated in hydrogen-containing buffer and the higher mass species having fully exchanged in the presence of D2O. As HDX labeling time is increased, fully exchanged mAbs back-exchange with H2O, while unexchanged mAbs undergo the forward HDX reaction, resulting in the coalescence of the separated features recorded in initial mass spectra. Deconvoluted masses were used in our data analysis workflow to track the amount of HDX achieved experimentally.
To capture our protein level HDX results, we plotted the deuterium uptake level against HDX labeling time to generate an "exchange-in" profile for Mab4 sampled from the lower- and higher-density phases prepared in the H2O buffer (Figure 3(a)). We observed that Mab4 sampled from the lower-density phase within our samples exhibits larger mass shifts compared to Mab4 taken from the higher-density phase across all labeling time points, indicating increased flexibility and surface accessibility for Mab4 molecules in the lower-density phase. We also monitored HDX back-exchange, or the "exchange-out" profile, for our data, focusing on samples prepared in the D2O buffer, and observed a different trend (Figures 3(b) and S2(b)). Critically, the observed hydrogen uptake level is almost identical for mAbs sampled from the two phases, suggesting similar protein conformations and dynamics regardless of protein concentration (Figure 3(b)). This observation indicates that substituting the readily exchangeable hydrogens with deuterons as a starting point for our experiments may induce structural changes in the antibody. Our current HDX-MS workflow, however, cannot unequivocally identify the underlying cause of such a structural effect of deuteration. Thus, we focused primarily on Figure 3(a) when constructing our LLPS protein structure models below.
HDX-MS at the peptide level defines local conformational differences in phase-separated mAbs

We probed local conformational differences in phase-separated Mab4 samples using bottom-up HDX-MS. HDX labeling was carried out over five time points: 30 s, 100 s, 1000 s, 2000 s, and 10000 s. In total, we detected more than 100 peptides reproducibly during our bottom-up HDX-MS analysis, producing a sequence coverage of 77.4% for the Mab4 heavy chain and 100% for the Mab4 light chain. Similar to our intact mass measurements, a bimodal distribution of isotopic peaks was typically observed for all peptides detected after the HDX reaction.

Figure 1. DSC thermograms of Mab4 from upper phase (orange) and lower phase (blue). Mab4 samples from the low- and high-density phases were diluted from 28 mg/mL and 150 mg/mL to 1 mg/mL, respectively. Protein denaturation was induced by ramping temperature to 90°C at a 1°C/min rate.

However, not all deuterated species were well resolved, owing to the smaller mass differences and relatively wider isotopic distributions exhibited by small peptides upon deuteration in comparison to protein data, where average mass data are collected. Such bimodal distributions in m/z posed challenges in processing our HDX data, which were largely overcome by using Mass Spec Studio 25 to produce an integrative data processing workflow. Figure 4 shows a representative selection of 12 peptides, covering all Mab4 domains, where deuterium uptake is tracked as a function of labeling time. In general, most peptides detected from the higher-density phase show lower deuterium uptake levels compared to those extracted from the lower-density phase. For some of these peptides, deuteration differences observed between the two phases are consistent across all labeling time points, whereas some peptides display noticeable trends in their relative deuteration levels. To evaluate the significance of the observed differences, we used a statistical analysis module within Mass Spec Studio to further analyze our peptide HDX-MS results, outputting mass difference values across peptides and evaluating these changes against the mean variation in our samples to assess the statistical significance of the changes in deuterium incorporation detected. As shown in Figure 5, a global view of our statistically processed data was achieved by plotting the mass differences of all peptides and projecting gray dashed lines that represent a two standard deviation threshold (±0.48 Da) identified by our analysis as the minimal difference value to assign significance to the detected change at the 95% confidence interval. At labeling times of 30 s and 100 s, almost all identified peptides exhibit decreased HDX in the high-density phase, of which about 40% represent significant changes. We also note an apparent decrease in differentiated exchange patterns at longer labeling time points, likely due to false negative peak identifications caused by increased mass overlap at large absolute levels of HDX. As such, these longer time points are not considered in our detailed structural analysis below.
In order to begin building a molecular model of mAb conformational changes that occur during LLPS based on our data, we mapped the HDX-MS results onto a homology model of Mab4 built from an IgG4 crystal structure. Figure 6 shows significant HDX differences mapped on the homology model at time points 30 s and 100 s. Peptide segments within Mab4 where we observed significantly decreased deuterium uptake in the high-density phase are colored blue, gray-colored areas represent peptide segments showing no significant differences between the two phases, and green regions indicate those missing from our dataset. Though we did not achieve complete sequence coverage for Mab4, the peptides identified in HDX-MS experiments comprehensively cover all Mab4 domains, giving us a detailed view on LLPS-associated structural changes. In general, we observe peptides that exhibit significant changes in deuterium uptake across all regions of the antibody, with most of the detected shifts in protein flexibility and/or accessibility present in the antigen-binding fragment (Fab) and in the Fc region proximal to the site of N-linked glycosylation.
Discussion
Understanding the behavior of therapeutic proteins at high concentrations is of interest due to the growing demand for such high concentration formulations as treatment options. However, a lack of dilution-free analytical techniques poses many challenges in characterizing concentration-dependent protein properties. LLPS is of particular concern during the discovery and development of therapeutic proteins. Based on our initial LLPS screening experiments, phase separation was observed for an IgG4 prepared at a specific ionic strength and pH and at low temperature, where a less dense phase containing protein at lower concentration and a higher-density phase consisting of concentrated protein were formed. Biophysical profiles recorded at low concentration were highly similar for Mab4 samples taken from the two separated phases, suggesting that any differential structural properties in LLPS might not be preserved during sample dilution. In contrast, using our new HDX-MS workflow, we were able to carry out the deuterium labeling reaction directly at high protein concentration by mixing the protein sample with D2O buffer containing the identical protein.
We observed a lower deuterium uptake level for Mab4 sampled from the high-density phase versus Mab4 from the low-density phase, at both the intact protein and peptide levels. The HDX results suggest that mAb structural changes occur during LLPS, involving multiple regions of the mAb, as shown in the homology model. Although the HDX-MS experiments cannot unambiguously map sites on the protein associated with altered structure or protein-protein contacts, our results clearly indicate that mAb conformation and dynamics are perturbed at the local level by LLPS and the associated shifts in protein concentration. One possible explanation for these observations is the formation of antibody clusters in the condensed, high-density phase involving specific points on the mAb surface. The overall decrease in deuterium uptake for molecules in the higher-density phase may also be influenced by molecular crowding, which may act to rigidify the domain movements. The significant deuteration differences observed in the Fab region can be rationalized by a combination of crowding effects and the formation of mAb oligomers with protein-protein interfaces associated with the Fab and Fc regions of Mab4.
In summary, we developed a novel dilution-free HDX-MS strategy and demonstrated the application of this method for a comparative conformational analysis of mAbs in a phase-separated sample. In the HDX-MS monitored at the intact protein level, the measured masses of deuterated Mab4 sampled from the high-density phase were consistently lower than those of Mab4 extracted from the low-density phase, suggesting mAb structural changes induced by phase separation. A more comprehensive HDX-MS analysis at the peptide level provided localized structural information. Our results were mapped on a homology model, highlighting the Fab and Fc regions that are likely involved in either local conformational changes or protein-protein association events at high concentrations. Although specific interaction sites were not explicitly mapped, this HDX-MS method can be used to directly measure the structural impact of high protein concentration. Ongoing efforts in experimental method development and data processing will continue to build and refine HDX-MS approaches into validated methods that can be combined with orthogonal biophysical tools to increase our understanding of protein structures over an ever-wider array of therapeutically relevant conditions.
Materials
A humanized IgG4 monoclonal antibody (referred to as "Mab4") was expressed, purified, and formulated at Eli Lilly and Company. Deuterium oxide (99.9% atom D) was purchased from ThermoFisher Scientific. All other chemicals were purchased from Fisher Scientific.
Buffer screening experiments
Ten millimolar citrate buffers at pH 5.5, 6, and 6.5 were prepared by dissolving solid citric acid and solid monosodium citrate at specific ratios. Solid sodium chloride was weighed and dissolved in the citrate buffer to set the ionic strength. Buffer pH was measured and adjusted using a calibrated pH meter at room temperature. Mab4 was buffer exchanged through overnight dialysis using a 10K MWCO dialysis device. After protein dialysis, Mab4 samples were stored at 5°C at least overnight to allow phase separation. After phase separation, the concentrations of Mab4 in the two phases were measured with UV-Vis at 280 nm. The results are discussed in the Supporting Information and were used to determine the buffer condition for the following phase separation study.
Phase separation sample preparation for HDX-MS
Ten millimolar citrate buffer with 50 mM NaCl at pH 6 was chosen for the phase separation study. Two buffers were prepared for HDX-MS, using water or deuterium oxide as the solvent. The pH values of the buffers were direct readouts from the pH meter without any correction for the isotope effect. Two fractions of Mab4 samples at a concentration of 50 mg/mL were prepared. One Mab4 fraction was dialyzed into the buffer prepared in water and the other fraction into the buffer prepared in D2O. Mab4 samples were then incubated at 5°C for one week, allowing phase separation. The longer incubation time also permits the hydrogen-deuterium exchange to reach equilibrium for Mab4 prepared in the D2O buffer. The concentrations of the upper and lower phases were measured to be 28 mg/mL and 150 mg/mL, respectively, by UV-Vis at 280 nm.
Biophysical assays: differential scanning calorimetry and dynamic light scattering

Protein samples were taken from the two separated phases and diluted to 1 mg/mL using the same buffer. DSC measurements were performed on a MicroCal DSC instrument (Malvern Panalytical technologies). Temperature was ramped from 25°C to 90°C at a 1°C/min rate. The buffer-buffer baseline was measured before running the protein sample, and the baseline-subtracted thermograms were plotted. The onset temperature and maximum temperature for the unfolding transition were obtained from the DSC data.
DLS measurements were performed at four protein concentrations: 0.5 mg/mL, 1 mg/mL, 2 mg/mL, and 3 mg/mL. Protein samples were taken from the two separated phases and diluted to the target concentrations. The interaction parameter (k_D) value was then determined by a linear fit of the measured (mutual) diffusion coefficients (D_m) as a function of concentration.
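As an illustration of this fit, the sketch below extracts k_D from the standard low-concentration relation D_m = D_0(1 + k_D·c), so that k_D is the fitted slope divided by the intercept; the diffusion values used are hypothetical placeholders, not measured data.

```python
# Linear fit of mutual diffusion coefficient vs. concentration to obtain k_D.
import numpy as np

c = np.array([0.5, 1.0, 2.0, 3.0])  # protein concentration, mg/mL
Dm = np.array([4.30e-7, 4.21e-7, 4.00e-7, 3.79e-7])  # cm^2/s, placeholders

slope, intercept = np.polyfit(c, Dm, 1)  # D_m = intercept + slope * c
k_D = slope / intercept                  # units of mL/mg
print(f"k_D = {k_D * 1000:.1f} mL/g")    # negative value: attractive interactions
```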
Global hydrogen-deuterium exchange mass spectrometry
For proteins in the upper phase, at a concentration of 28 mg/mL, 10 µL of sample in water buffer was taken and mixed with 10 µL of sample in D2O buffer. The mixture was incubated in the LC autosampler at ~5°C. The LC-MS sequence was set up to inject the sample at 100 s, 460 s, 7300 s, 1000 s, 10000 s, 20080 s, and 29980 s. The HDX reaction was quenched once the protein sample was loaded into the LC sample loop and mixed with acidified mobile phase. The protein sample was desalted and eluted on a reverse phase column (Agilent PLRS 1 × 50 mm, 1000 Å, 5 µm), using mobile phases composed of 0.05% TFA in H2O and 0.04% TFA in acetonitrile. The LC column was kept in an ice bath to minimize back exchange. Following on-line LC separation, MS analysis was performed on a Waters Synapt G2-Si Q-Tof mass spectrometer. For the protein sample in the lower phase, at a concentration of 150 mg/mL, global HDX-MS analysis was performed following the same protocol as above.
Bottom-up hydrogen-deuterium exchange mass spectrometry

A quench buffer containing 0.45 M TCEP, 3.6 M GdnHCl, and 0.18 M phosphate at pH 2.3 was prepared and equilibrated at 0°C. For analysis of Mab4 in the upper phase, 2 µL of protein sample in H2O buffer was mixed with 2 µL of the sample in D2O buffer and incubated at 5°C for five labeling time points: 30 s, 100 s, 1000 s, 2000 s, and 10000 s. At each time point, the exchange reaction was quenched by quickly adding 60 µL of quench buffer at 0°C, followed by dilution with 60 µL of 0.1% FA, pH 2.5. The total time for quench and dilution was carefully controlled at 1 min. The labeled and quenched sample was then subjected to protease digestion by incubation with 8 µL of 10 mg/mL pepsin at 0°C for 3.5 min. For analysis of proteins in the concentrated phase, the exchange reaction was carried out and quenched in the same fashion, except that 28 µL of 10 mg/mL pepsin were used to produce a more effective digestion, given the higher protein concentration. Consequently, the volume of 0.1% formic acid added to the higher-density phase sample was lowered to 40 µL, in order to keep the sample dilution levels consistent with the sample from the low-density phase.
The digested sample was immediately analyzed by LC-MS. Peptides were separated on a C18 column (Waters ACQUITY UPLC CSH C18, 1.7 µm, 2.1 × 50 mm). To minimize back exchange, the LC column was kept in an ice bath. The mobile phases were composed of H2O and acetonitrile, both containing 0.1% FA. An acetonitrile gradient from 10% to 50% was used to elute the peptides. The eluents were directly analyzed by a Thermo Scientific Orbitrap Fusion Lumos Tribrid™ Mass Spectrometer operating in positive mode.
HDX-MS data analysis
Masslynx (Waters Corp.) was used to process global HDX-MS data. The zero-charge mass spectrum was generated by performing the MaxEnt deconvolution. The global HDX-MS kinetic plot was created by plotting the measured intact mass of the deuterium-labeled mAb against the reaction time. For peptide level HDX-MS data, MS/MS data collected for the control sample was processed using Proteome Discoverer (Thermo Scientific) to generate a reference peptide list. The HDX-MS data were then analyzed using Mass Spec Studio. 25 Briefly, both a master peptide list and the raw MS data were input into the software to produce initial peptide identifications. Peptides identified based on both their monoisotopic mass and retention time were then manually validated. Though Mass Spec Studio cannot directly deconvolute the bimodal distributions detected for our deuterated peptide signals, it is capable of estimating the deuterium content by fitting a subset of isotopic peaks to an isotope expansion model. Statistical analysis was performed to calculate the averaged standard deviation of deuterium uptake across all peptide replicates. In addition to the 2x standard deviation criteria, a two-tailed Student's t-test was performed using pooled standard deviation to calculate the p-values from the replicate data on a per-peptide basis. A homology model was built based on the crystal structure of IgG4 (PDB: 5DK3) using PyMod 2.0 within Pymol. 26,27 Statistical analysis of the HDX-MS was performed using the statistical analysis module in Mass Spec Studio and the results were visualized using our homology model. | 2019-03-21T13:02:51.167Z | 2019-03-19T00:00:00.000 | {
"year": 2019,
"sha1": "4ef495ecdb41b784a1a78bb95b7b45af275ccecd",
"oa_license": "CCBYNCND",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/19420862.2019.1589850?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4ef495ecdb41b784a1a78bb95b7b45af275ccecd",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
251990401 | pes2o/s2orc | v3-fos-license | The Identification of Chinese Herbal Medicine Combination Association Rule Analysis Based on an Improved Apriori Algorithm in Treating Patients with COVID-19 Disease
In this work, an improved Apriori algorithm is proposed. The main goal is to improve the processing efficiency of the algorithm, and the idea and process of the Apriori algorithm are optimized. The proposed method is compared with the classical association rule algorithm to verify its effectiveness. Traditional Chinese medicine plays a certain role in the prevention and treatment of COVID-19. In order to deeply mine the association rules between Chinese herbal medicines for the prevention and treatment of COVID-19, this improved Apriori algorithm is applied to data from retrieved published scientific literature and the guidelines for the prevention and treatment of COVID-19 published across China. Based on the representation of traditional Chinese medicine data in binary form, the potential core traditional Chinese medicine combinations in the treatment of COVID-19 are identified. The results of association rules on Chinese herbal medicine data obtained from a real database provide an important reference for the analysis of Chinese herbal medicine combinations in the treatment of COVID-19.
Introduction
In recent years, against the background of the renewed recognition of the value of Chinese traditional medicine [1] and the gradual maturity of data mining technology, research in the field of traditional Chinese medicine data mining has gradually become active, in order to promote the further development of traditional Chinese medicine and realize its modernization. Researchers have gradually come to combine data mining, machine learning, artificial intelligence, and other technologies in traditional Chinese medicine research. They hope to discover hidden principles and laws through the mining, analysis, induction, and summary of the large body of clinical experience data accumulated by traditional Chinese medicine practitioners over thousands of years.
Since December 2019, many pneumonia cases of unknown origin have been found in many countries and regions around the world. On February 11, 2020, the disease caused by the new coronavirus was officially named coronavirus disease-19, referred to as COVID-19 [2]. The pandemic has wrought serious negative effects on the global economy and society.
As a well-practiced therapeutic modality, traditional Chinese herbal medicines play a complementary role in alleviating the symptoms of certain diseases and improving the health-related quality of life among COVID-19 patients [3]. It has been widely accepted that the choice and combination of Chinese herbal medicines are vital for successful Chinese drug treatment [4]. The principles for choosing and combining Chinese herbal medicines are based on the Biaoben theory [5] and Meridian theory [6] in ancient Chinese therapy.
The Apriori algorithm is a type of association rule mining algorithm; it proceeds by identifying the frequent individual itemsets in the database [7]. The Apriori algorithm is often used to analyze the combination of prescriptions and acupuncture points in the treatment of diseases by traditional Chinese medicine. It is also widely used in many other fields, for example, to explore the main influencing factors and their interactions in dangerous driving conditions in urban traffic [8], in the causal analysis of bridge deterioration [9], in the employment trend analysis of college graduates [10], in the analysis of fault items in power optical transmission networks [11], and in finding frequent patterns in live transportation data [10]. In the mining of association rules applied to traditional Chinese medicine, examples include prescription analysis for the treatment of impotence [12], optic atrophy [9], and so on [13][14][15][16][17]. Table 1 shows the differences between relevant studies and this study.
Starting from a comprehensive consideration of the redundancy of traditional Chinese herbal medicine treatment data and the difficulty of rule mining, this article optimizes the idea and process of the Apriori algorithm with the goal of improving the processing efficiency of the algorithm and deeply mining the association rules between Chinese herbal medicines, and puts forward an improved Apriori algorithm. The improved algorithm is simulated and compared with the classical Apriori algorithm to verify its effectiveness. The calculated association rule results for Chinese herbal medicine data provide an important reference basis for the analysis of Chinese herbal medicine combinations in the treatment of COVID-19.
Section 1 introduces some background and presents some related work. Section 2 gives some concepts of association rules. Section 3 describes the improved Apriori algorithm. Section 4 demonstrates the case study and result analysis. Finally, Section 5 concludes the article.
Association Rules.
Association rule mining is a basic data mining method used to mine interesting associations or correlations between itemsets from large-scale data sets. It is very helpful for data classification, clustering, and other data mining tasks. The formal description of association rules is as follows [10][11][12]: Dataset D is the collection of all transactions in the database. Each attribute of each record in the dataset is called an item, and a collection of attributes is called an itemset. Each nonempty record is called a transaction T.
Let X and Y be two itemsets contained in transaction T, that is, X and Y are both proper subsets of T. If X is a nonempty subset, Y is also a nonempty subset, and the intersection of X and Y is an empty set, then X -> Y constitutes an association rule in the transaction set T.
Support.
This is to say that an association rule is an expression of the form X -> Y, where X is called the preceding term (antecedent) and Y the following term (consequent). The probability that both X and Y are contained in the itemset is called the support of X -> Y, denoted support(X -> Y) = P(X ∪ Y).
Confidence.
Under the condition that the antecedent X of the association rule occurs, the probability that the consequent Y also occurs, that is, the probability that an itemset containing X also contains Y, is called the confidence of the association rule X -> Y, denoted confidence(X -> Y) = P(Y | X).
Lift.
The ratio of the probability of containing Y under the condition that X is present to the probability of containing Y in the itemset without this condition is called the lift of the association rule, denoted lift(X -> Y) = confidence(X -> Y) / support(Y).

Association rule mining can usually be regarded as two basic processes: ① find all frequent itemsets from the transaction set, that is, find all itemsets whose support is greater than the given minimum support threshold; ② use the frequent itemsets found in the first step to generate all association rules, and the association rules that meet the minimum confidence are the strong association rules to be mined.
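For illustration, the three measures can be computed directly from a transaction list as sketched below; the herb names in the example transactions are hypothetical.

```python
# Support, confidence, and lift for a rule X -> Y over a transaction list.
def support(itemset, transactions):
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(X, Y, transactions):
    return support(X | Y, transactions) / support(X, transactions)

def lift(X, Y, transactions):
    return confidence(X, Y, transactions) / support(Y, transactions)

T = [{"ephedra", "licorice"}, {"ephedra", "licorice", "apricot kernel"},
     {"licorice", "gypsum"}, {"ephedra", "apricot kernel"}]
X, Y = {"ephedra"}, {"apricot kernel"}
print(support(X | Y, T), confidence(X, Y, T), lift(X, Y, T))
```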
Apriori Algorithm.
The algorithm uses an iterative, layer-by-layer search to find the largest frequent k-itemsets. First, the database is traversed to obtain the candidate 1-itemsets and their supports; any itemset whose support is lower than the minimum support is pruned, yielding the frequent 1-itemsets. Then, the obtained frequent 1-itemsets are joined to obtain the candidate 2-itemsets and their supports, and so on. This is iterated until frequent (k + 1)-itemsets can no longer be obtained, and the corresponding frequent k-itemsets are the output [9,13,14]. The a priori property of the Apriori algorithm is that every subset of a frequent itemset must itself be frequent. From this property, a corollary is obtained: every superset of an infrequent itemset must be infrequent [15,16]. Using this property and its corollary, we can mine all levels of frequent itemsets that meet the support and confidence thresholds.
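As a minimal, illustrative rendering of this level-wise procedure (count, prune, join, repeat) rather than the authors' code, the sketch below assumes `transactions` is a list of Python sets and returns every frequent itemset with its support count.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Minimal level-wise Apriori; returns {frozenset: support_count}."""
    n_min = min_support * len(transactions)
    # Candidate 1-itemsets and their support counts.
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= n_min}
    result, k = dict(frequent), 2
    while frequent:
        # Join step: unions of frequent (k-1)-itemsets that have size k.
        prev = list(frequent)
        candidates = {a | b for a in prev for b in prev if len(a | b) == k}
        # Prune step: every (k-1)-subset must be frequent (a priori property).
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent
                             for s in combinations(c, k - 1))}
        counts = {c: sum(c <= t for t in transactions) for c in candidates}
        frequent = {s: n for s, n in counts.items() if n >= n_min}
        result.update(frequent)
        k += 1
    return result
```

On the toy transactions shown earlier, `apriori(transactions, 0.4)` returns every itemset contained in at least two of the five transactions.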
As mentioned above, the Apriori algorithm is widely used in many fields [17], [18], [19], and the mining of association rules in traditional Chinese medicine is basically carried out with Apriori-type algorithms, for example, prescription analysis for the treatment of peptic ulcers [20], leukaemia [21], and so on.
The Idea of the Improved Apriori Algorithm.
Generally, the ways to improve the mining of frequent itemsets include reducing the generation of candidate itemsets and reducing the number of transaction records that must be compared when computing itemset support. The improvement ideas are as follows.
(1) Strong association rules are established by deleting unrelated single transaction items, finding the association relationships between the remaining items, and mining their associations. In the process of generating frequent itemsets, the Apriori algorithm needs to scan the huge transaction dataset many times; deleting irrelevant transaction items reduces the dataset to a certain extent and improves operating efficiency. (2) Row and column compression through a Boolean matrix reduces the number of scans of the transaction database [8,[22][23][24][25][26][27]; during scanning, the candidate itemsets are replaced by an index table, which avoids generating a large number of candidate itemsets. (3) When searching frequent itemsets and calculating confidence, a Trie tree is used to speed up the search. A Trie tree is a data structure commonly used in data mining algorithms; it occupies little memory and allows the effective information in the tree to be built and mined quickly [28]. Many prefix-tree techniques have been applied to frequent itemset mining algorithms to improve their execution efficiency.
Table 1. Differences between relevant studies and this study (partial; some cells lost in extraction):
- Study [8]: collects natural driving data, extracts risk conditions, and analyzes the direction and intensity of risk influencing factors using the confidence of Apriori association rules. Field: road traffic driving. Method: ordinary Apriori algorithm.
- Weidi et al. [9] (2021): the Apriori algorithm is used to analyze the causal association rules of bridge deterioration in Yunnan Province. Field: bridge construction. Method: genetic algorithm and grey correlation analysis to solve the problem of setting the support and confidence values in the Apriori algorithm.
- Luo et al. [10] (2021): based on the scores and employment information data of higher vocational college graduates during their school years, uses the Apriori algorithm to analyze the correlation between school performance and actual employment.
Algorithm Procedure
Step 1. The database is traversed once and irrelevant transaction item records are deleted. The total number of transaction items is set as m and the traversed database as D. When the support count of item D_x (x = 1, 2, ..., m) equals 1, D_x is deleted; the traversal is repeated to obtain a new dataset D′.
Step 2. The transaction matrix is constructed, and its rows and columns are compressed.
The transaction dataset D′ is converted into a matrix Mat, where transactions are ordered by column and itemsets by row. The matrix is defined as follows: if the i-th itemset is in the j-th transaction, the value d_ij in row i and column j of the matrix is 1; otherwise, it is 0. Hence, the Boolean matrix is obtained. Through this Boolean matrix, the support of the itemset corresponding to a row can be calculated: the support is obtained by summing the bits of the row vector (for multi-item sets, by a bitwise AND of the relevant rows followed by a sum).
From the Boolean matrix and this calculation method, the support of each itemset is obtained, along with the itemset index table. Then, the frequent itemsets are obtained by comparison with the set minimum support. By the property of frequent itemsets, if an itemset is nonfrequent, then all supersets of that itemset are also nonfrequent and can be deleted directly; this is row compression. Since each transaction of the Boolean matrix corresponds to a column vector, if the length of a transaction is less than k, it cannot contain a frequent k-itemset L_k; such a transaction can be deleted directly during the search, which is column compression.
Step 3. The compressed Boolean matrix is scanned again, the supports are calculated, and the index table is created. The above steps are repeated until frequent k-itemsets can no longer be generated; finally, all frequent itemsets are presented in the form of an index table.
Step 4. Finally, all frequent itemsets are stored in a Trie tree and searched to calculate confidence, so as to generate the strong association rules, that is, the association rules that users are interested in.
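As one plausible reading of Steps 1-4 (not the authors' implementation), the sketch below uses NumPy bit vectors for the Boolean matrix; the index table is simplified to a Python dict, and the function and variable names are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def improved_apriori(transactions, min_support):
    """Sketch of Steps 1-4: singleton deletion, Boolean matrix,
    row/column compression, and supports via bitwise AND."""
    n_min = min_support * len(transactions)
    # Step 1: delete items that occur in only one transaction.
    items = sorted({i for t in transactions for i in t})
    keep = [i for i in items if sum(i in t for t in transactions) > 1]
    # Step 2: Boolean matrix with itemsets as rows, transactions as columns.
    mat = np.array([[i in t for t in transactions] for i in keep],
                   dtype=np.uint8)
    # 1-itemset supports are row sums; rare rows are dropped (row compression).
    frequent = {frozenset([item]): mat[r]
                for r, item in enumerate(keep) if mat[r].sum() >= n_min}
    result, k = {}, 1
    while frequent:
        result.update({s: int(v.sum()) for s, v in frequent.items()})
        # Column compression: a transaction with fewer than k + 1 items
        # cannot contain any (k + 1)-itemset, so mask it out.
        col_ok = (mat.sum(axis=0) >= k + 1).astype(np.uint8)
        nxt = {}
        for (s1, v1), (s2, v2) in combinations(frequent.items(), 2):
            union = s1 | s2
            if len(union) == k + 1 and union not in nxt:
                v = v1 & v2 & col_ok  # bitwise AND replaces set operations
                if v.sum() >= n_min:
                    nxt[union] = v
        frequent, k = nxt, k + 1
    return result
```

The bit vector carried along with each frequent itemset doubles as its compressed matrix row, so support at the next level is a single bitwise AND followed by a sum, in the spirit of Steps 2 and 3.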
Algorithm Explanation
(1) Multiple database scans are avoided. The data records can be replaced with encoded sets after scanning the database only twice; after that, all frequent itemsets can be obtained through in-memory operations alone. Thus, the efficiency of the algorithm is improved.
(2) Binary (bitwise) operations replace the set operations in the execution of the Apriori algorithm, which improves execution efficiency.
(3) A Trie tree, sometimes also known as a prefix tree or digital tree, is a data structure that stores data in an ordered and efficient way. Using a Trie tree improves the algorithm's efficiency, as sketched below.
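As a concrete illustration of point (3), the minimal prefix-tree sketch below stores frequent itemsets (as sorted item sequences) together with their support counts, so that the counts needed for confidence can be looked up quickly; the class and method names are illustrative assumptions.

```python
class TrieNode:
    """Node of a prefix tree over sorted item sequences."""
    __slots__ = ("children", "count")
    def __init__(self):
        self.children = {}   # item -> TrieNode
        self.count = None    # support count, set only where an itemset ends

class ItemsetTrie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, itemset, count):
        node = self.root
        for item in sorted(itemset):
            node = node.children.setdefault(item, TrieNode())
        node.count = count

    def lookup(self, itemset):
        node = self.root
        for item in sorted(itemset):
            node = node.children.get(item)
            if node is None:
                return None
        return node.count

# Confidence of a rule X -> Y from the stored counts:
# confidence(X -> Y) = lookup(X | Y) / lookup(X).
```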
Case Illustration
The Chinese herbal medicine data from Chinese medicine treatment of COVID-19 are selected for the experiment, and the Apriori algorithm, the FP-growth algorithm (FP stands for frequent pattern), and the improved Apriori algorithm are compared and analyzed. The program, written in Python 3.8.3, simulates and analyzes different values of the algorithm parameters, including support and confidence. According to the simulation results, the algorithm with the strongest applicability is selected and reasonable parameters are set for deeply mining the hidden association rules between Chinese herbal medicines. The simulation hardware environment is an Intel(R) Core(TM) i7-10875H CPU @ 2.30 GHz with 16.0 GB RAM.
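A benchmarking harness in the spirit of this comparison might look like the sketch below, which relies on the third-party mlxtend library's apriori and fpgrowth implementations; the tiny prescription list, parameter grid, and timing loop are illustrative assumptions, not the study's actual program.

```python
import time
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, fpgrowth, association_rules

# Hypothetical prescriptions: one list of herbs per prescription.
prescriptions = [["Gancao", "Xingren"], ["Gancao", "Shengshigao", "Xingren"],
                 ["Xingren", "Lianqiao"], ["Gancao", "Shengshigao"]]
te = TransactionEncoder()
df = pd.DataFrame(te.fit(prescriptions).transform(prescriptions),
                  columns=te.columns_)

# Vary minimum support and time each algorithm on the same one-hot data.
for min_sup in (0.25, 0.50, 0.75):
    for name, algo in (("Apriori", apriori), ("FP-growth", fpgrowth)):
        t0 = time.perf_counter()
        freq = algo(df, min_support=min_sup, use_colnames=True)
        rules = association_rules(freq, metric="confidence",
                                  min_threshold=0.6)
        print(f"{name} (min_support={min_sup}): "
              f"{time.perf_counter() - t0:.4f} s, {len(rules)} rules")
```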
Chinese Herbal Medicine Data.
This study was conducted based on the pharmaceutical prescriptions that have achieved good preventive and therapeutic effects in practice.
We searched the treatment literature on CNKI and the official treatment plans from all over China. CNKI is a key national research and information publishing institution in China. Its first database was the China Academic Journals Full-text Database. In 1999, CNKI started to develop online databases. To date, CNKI has built a comprehensive China Integrated Knowledge Resources System, including journals, doctoral dissertations, masters' theses, proceedings, newspapers, yearbooks, statistical yearbooks, ebooks, patents, and standards. The plans, published on the official websites of the national, provincial, autonomous region, and municipal health commissions up to November 19, 2020, explicitly target the prevention and treatment of COVID-19. The prescriptions in the plans were extracted and screened. Prescriptions consisting of a single Chinese medicine, prescriptions with incomplete composition or dosage, recommended Chinese patent medicine prescriptions, and prescriptions not clearly signed by the recommending department or unit were excluded. The Chinese medicine prevention and treatment plans are presented in Table 2. In the table, TCM means traditional Chinese medicine.
The names of the traditional Chinese medicines entered were standardized according to the national planning textbook for colleges of traditional Chinese medicine, "Chinese Medicine", and the 2015 edition of the "Chinese Pharmacopoeia".
Model Building.
The Apriori algorithm, FP-growth algorithm, and improved Apriori algorithm models are created, respectively. Different values of the algorithm parameters, including support and confidence, are simulated and analyzed. According to the simulation results, the algorithm with the strongest applicability is selected and reasonable parameters are set to carry out association rule mining on the Chinese herbal medicine data for COVID-19 treatment. The modeling process includes the following: inputting sample data and modeling parameters; comparing the operating efficiency of the Apriori algorithm, FP-growth algorithm, and improved Apriori algorithm under different parameter settings; selecting, according to the simulation results, the algorithm with the strongest applicability for modeling and simulation; and, after processing the treatment Chinese herbal medicine database and inputting parameters, outputting the association rules between Chinese herbal medicines and analyzing the results.
Using the Chinese herbal medicine dataset, the algorithm before and after optimization is simulated and compared with the FP-growth algorithm, and the variation of running time with the two parameters, support and confidence, is analyzed, as shown in Figures 1 and 2. Figure 1 shows the comparison of running times as the minimum support changes, before and after the improvement. As support increases, the running time of both the original and improved algorithms shortens. When support is small, the running time of the improved algorithm is less than that of the Apriori algorithm before optimization and the FP-growth algorithm. The greater the support, the more important the association rules are, and the shorter the running time is.
Figure 2 shows the comparison between the execution times of the two algorithms, before and after improvement, as the minimum confidence parameter changes. As confidence increases, there is little difference in running time between the two algorithms. When confidence is small, the running time of the improved algorithm is less than that of the Apriori algorithm before optimization and the FP-growth algorithm, and the reliability of the association rules is strongest at this point.
In conclusion, under the same database conditions and different parameter settings, the operating efficiency of the improved Apriori algorithm is significantly better than that of the FP-growth algorithm, and the effectiveness of the algorithm is fully verified. Therefore, this article applies the improved Apriori algorithm for modeling and simulation and for deeply mining the Chinese herbal medicine association rules, with a minimum support of 13% and a minimum confidence of 60%.
Algorithm Performance Verification and Result Analysis.
According to the above operation results, 4768 association rules are obtained (such as (Shengshigao) -> (Xingren)), meaning that the Chinese herbal medicines Shengshigao and Xingren occur together with a support of 15% and a confidence of 73%.
We extracted binary data from the original 237 Chinese herbal medicine prescriptions (Figure 3). There were 237 Chinese herbal medicines extracted from the 242 retrieved prescriptions in the retrieved references and plans. We carried out frequency analysis, calculated the frequency of drug use in the prevention and treatment plans, and obtained the high-frequency core drugs. The frequency distribution of the Chinese herbal medicines is presented in Figure 4. Gancao, Huoxiang, Xingren, Fuling, Chenpi, Lianqiao, Maidong, Shengshigao, Jinyinhua, Huangqi, Cangzhu, Houpu, Jiegeng, Chaobaizhu, Shenghuangqi, Yiyiren, Fangfeng, Lugen, Fabanxia, and Chaihu were the top 20 most frequently selected Chinese herbal medicines. As shown in Figure 4, these drugs are often used to treat colds, pneumonia, cough, and other symptoms and diseases.
Improved Apriori Algorithm-Based Association Rule Analysis for Itemsets of Chinese Medicine Combination Items.
We investigated 4768 association rules based on the integrated Chinese medicine data. The association rules were presented visually in a scatter plot, in which the lift of a rule is the ratio of the observed support to that expected if X and Y were independent (Figure 5). The results demonstrated that all rules had a high lift. The association rules between individual Chinese medicines were ordered by support. The top 20 improved Apriori algorithm-based association rules of Chinese medicine are listed in Table 3, in which "LHS" stands for left-hand side and "RHS" stands for right-hand side. For example, No. 1 is the association rule (Shengshigao) -> (Xingren), which has a support of 0.15126050, a confidence of 0.7346939, a coverage of 0.20588235, and a lift of 2.534161. This rule occurred 36 times in the dataset.
Graph-based visualization by color and size was used for the grouped itemsets. Based on a grouped matrix of these 20 association rules, their features were exhibited visually (Figure 6). This figure clearly represents the association rules and is suitable for very small sets of rules, avoiding a chaotic presentation.
The analysis shows that the most frequent drugs include Gancao, Xingren, Huangqin, Lianqiao, etc. Modern pharmacology shows that the glycyrrhizic acid and glycyrrhetinic acid in Gancao have antiviral effects and can significantly inhibit virus replication [26]. Modern pharmacological studies indicate that Huangqin, Lianqiao, etc. have antiviral activity, as well as cough-relieving and expectorant effects. Jinyinhua is also reported to have antiviral effects [27].
Through the analysis of association rules, it is found that the medicine group with the highest confidence is (Tinglizi, Xingren) -> (Shengshigao); this group plus Mahuang and Gancao forms the composition of Maxing Shigan decoction. The whole prescription has pungent-cooling, lung-clearing, and asthma-relieving effects, described in Chinese medicine treatment terminology such as "lung-qi," "lowering lung," and "promoting qi." Gypsum (Shengshigao) clears and relieves lung heat, and licorice (Gancao) nourishes qi and harmonizes the various medicines. Modern pharmacological studies have shown that Maxing Shigan decoction [27] has a wide range of effects on respiratory diseases; has good anti-inflammatory, anti-influenza, and immunity-improving effects; and can play a role comparable to the chemical drug oseltamivir against influenza virus neuraminidase activity. This prescription has been valued in the prevention and treatment of H1N1 influenza, avian influenza, and SARS, and is worthy of further clinical research and promotion.
In clinical practice, traditional Chinese medicine usually treats patients with a combination of multiple medicines rather than a single medicine. In the theory of Chinese medicine, the technical term for such "combined" Chinese medicinal decoctions is "compatibility," which means selectively combining two or more drugs; the important thing is to determine the drug combination rather than a single drug.
Conclusion and Future Work
Aiming at the low efficiency of the Apriori algorithm, this article establishes an improved association rule mining model by mining the strong association rules between items and reducing the number of database scans, and puts forward an improved algorithm. We selected Chinese herbal medicine data for treating COVID-19 to mine the hidden association rules between Chinese herbal medicines and the frequent Chinese herbal medicine combinations. The simulation results showed that the improved algorithm meets the requirements of Chinese herbal medicine association rule mining, improves data-processing efficiency and the reliability of the association rules mined for Chinese herbal medicine treatment of COVID-19, and has good application value.
Besides, the algorithm used here can be further improved. The next step is to consider using the weighted Apriori algorithm if weights are available. Furthermore, association rule analysis is just one method of data mining.
Data Availability
The authors confirm that the data supporting the findings of this study are available within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
"year": 2022,
"sha1": "63dc59cb812fc447b6a1256f86d338a8b7450c20",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/jhe/2022/6337082.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "36bbdb26c8376d6b5c4848faeb4407c9895554d0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Go/no-go task performance of Japanese children: Differences by sex, grade, and lifestyle habits
Background Japanese children face critical psychological challenges that urgently need to be addressed. Objective This study aimed to clarify performance differences in go/no-go tasks among Japanese elementary and junior high students by sex and grade and comprehensively investigate the relationship between children's lifestyle habits and performance. Methods In total, 4,482 (2,289 males, 2,193 females) 1st grade elementary to 3rd grade junior high students (6–15 years old) participated. We conducted a survey and the go/no-go experiments in the participating schools on weekday mornings from November 2017 to February 2020. We collected data on the number of errors in the go/no-go tasks in response to visual stimuli (commission errors in the no-go tasks; omission errors in the go tasks); and on lifestyle habits (i.e., sleep, screen time, and physical activity) using questionnaires. Results For the commission errors, the results demonstrated differences by sex and grade; for the omission errors, differences were only observed by grade. Additionally, we analysed the relationship between both types of errors and sex, grade, sleep conditions, screen time, and physical activity using binomial logistic regression analysis. Commission errors were significantly related to sex and grade whereas omission errors were related to grade, bedtime, screen time, and physical activity. Conclusions Our results highlighted that children's cognitive functions are related to their lifestyle habits (i.e., sleep conditions, screen time, and physical activity) in addition to sex and grade.
Introduction
The UNICEF Innocenti Report Card 16 (1), published in September 2020, uses a variety of comparable data from 38 Organisation for Economic Co-operation and Development or European Union member countries to rank the well-being of children in each country in three domains: good mental well-being, good physical health, and skills for life. The results showed that Japanese children ranked first in "good physical health"
but extremely low in "good mental well-being", at 37th place. Suicide, bullying, violence, and long-term absenteeism have been identified as social problems among Japanese children. The United Nations Committee on the Rights of the Child has expressed its concerns in the Concluding Observations on the Combined Fourth and Fifth Periodic Reports concerning Japan, in sections on "Research the root causes for suicide among children, implement preventive measures", in paragraph 20 (b), and on "Implement effective measures against bullying", in paragraph 39 (a) (2). Furthermore, paragraph 20 (a) goes so far as to identify the need to "Take measures to ensure that children enjoy their childhood, without their childhood and development being harmed by the competitive nature of society." Additionally, since the 1990s, problematic events such as a "breakdown in classroom discipline" and "sudden anger outbursts" have also been reported in Japanese schools (3). These problems suggest that Japanese children face critical psychological challenges that urgently need to be addressed. Many studies suggest that the physical basis of the mind lies in the brain; consequently, it should be possible to confirm some of the features of the mind by studying brain functions. Thus, many studies on brain structure and function have been conducted using non-invasive measurements of brain activity (e.g., magnetic resonance imaging, electroencephalography, and near-infrared optical topography), which have been regarded as indicating physical reactions of the mind. Through these measurements, it is possible to observe brain function in real time in terms of perceptions, movements, and cognition. Such scientific advancements have allowed for a deeper understanding of children's development and disabilities. However, although such measurements are non-invasive, their implementation is extremely difficult in the context of childcare and education because they require expensive equipment and specialised technology. Due to these limitations, observational approaches (e.g., cognitive function tests) that measure children's actions and activities, regarded as the output of their brain activity, may be more useful and easily applied. Specifically, many studies have used go/no-go tasks as one type of observational approach.
Casey et al. (4) investigated the development of cognitive function in children aged 4- to 18-years-old, and reported a decrease in reaction time between the ages of 4 and 12 years; likewise, Durston et al. (5) used go/no-go tasks as an index to analyse neurological deficits related to cognitive function in children with attention deficit hyperactivity disorder (ADHD) aged 6 to 11 years; many other studies share similarities with the research mentioned above [e.g., (6,7)]. One of the positive outcomes of such studies relates to the applicability of their findings; they may serve not only as theoretical support for field measurements of go/no-go tasks for children but also provide effective and reliable techniques that are useful when researchers want to develop measures to understand psychological disabilities. This method of measurement has also long been used intensively in Japan (3, 8, 9).
Furthermore, many studies have found a relationship between brain function and lifestyle habits, such as between sleep and physical activity (PA); for example, Touchette et al. (10) examined sleep duration patterns, behavioural characteristics, and cognitive function in children from school entry to elementary school. These authors reported a relation between short sleep durations and hyperactivity-impulsivity and between short sleep durations and children's cognitive test results. Other studies have found that sleep deprivation impairs cognitive function [e.g., (11)], and that taking a nap improves executive function tasks and affects prefrontal activity (12). Regarding PA, transient moderate-intensity exercise reportedly accelerates response time in visual choice reaction tasks (13), and the more PA a person undertakes, the faster the reaction to flanker tasks (14). Furthermore, children with low inhibitory control capabilities can be expected to experience greater effects from undergoing PA (15). Recently, researchers have raised concerns regarding the effects of Internet use on brain function. Horowitz-Kraus and Hutton (16) reported that the functional connectivity between the visual word form area and the left-sided language, visual, and cognitive control region of the brain increases with reading but decreases with longer periods of media use. Moreover, another study reported that Internet addicts have widespread white matter abnormalities (17). Given these findings, concerns have arisen in Japan regarding the possible negative effects of Internet use on children's cognitive function, as their use of the Internet is increasing (18).
However, these studies examined the relationship between specific lifestyle habits and cognitive functions using univariate analysis; hence, they did not investigate the possible effects of a combination of the children's different lifestyle habits. In everyday life, children's sleep, physical activity, screen time, and other aspects of their lives are all interrelated (19). This consideration led the WHO to develop 24-h behavioural guidelines that include recommendations on screen time and sleep duration, in addition to physical activity, to ensure the greatest health benefit (20). Therefore, this study aimed to clarify performance differences regarding go/no-go tasks among Japanese elementary and junior high students by sex and grade. Specifically, we intended to comprehensively investigate the relationship between children's lifestyle habits (e.g., sleep, screen time, and PA) and their performance.
Ethics approval and consent to participate
This study was conducted with the approval of the Ethics Review Committee for experiments on humans, Nippon Sport Science University (Approval No. 017-H092). Before participation, all potential participants and their guardians were provided with a written explanation concerning the purpose and content of the study, and all participants provided written informed consent. Measures were taken to ensure the anonymity of all collected data.
Participants
In total, there were 4,482 (2,289 males, 2,193 females) participants, spanning from 1st graders in elementary school to 3rd graders in junior high school (age range: 6-15 years old). Participants were recruited through snowball sampling from seven public elementary and three public junior high schools in eight Japanese cities (two urban and six suburban). No participants had any medical or psychological problems that could have affected study results.
Measures and procedure
The study was conducted on weekday mornings from November 2017 to February 2020 on days when there were no special school events. We collected data on children's grasp motor responses to go/no-go tasks by visual stimulation and on their lifestyle habits (i.e., sleep, screen time, and PA) through a questionnaire.
Go/no-go task
The present study was conducted following the go/no-go task methodology that has been previously used in Japan (3, 8, 9). Specifically, using a cerebral activity measurement program (made by Techno Muscat), data on go/no-go tasks were collected from groups of up to 12 individuals. The setting was a quiet classroom at each school, and there were no other persons present other than the participants and the investigators. Each participant sat on a chair and was separated from the next participant by a partition (Figure 1). The participants were instructed to use a rubber ball, held in their dominant hand, to respond to a visual stimulus (4 cm × 2.5 cm) provided by a light that was projected in front of a box (10.5 cm × 4.5 cm). The box was placed approximately 50 cm from and in front of the participant (metrics dictated by the investigator's rules). For elementary school students, the light stimulus referred to distinct colours (red and yellow lights); for junior high students, the stimulus referred to the distinction between light and darkness (170 nit and 30 nit). The task procedures were carried out in this order: first, the formation, and then the differentiation experiment (Figure 1), as described below.
In the formation experiment, the participants were instructed as follows: "From now on, the light in front of you will shine (for an elementary school student: "red"; for a junior high student: "bright"). When the light shines, grasp the rubber ball as quickly as possible. When the light goes off, release it". They then practised this ten times. Immediately afterwards, visual stimuli (length: 0.5-1.5 s) were presented five times at random intervals (from 3 to 6 s).
In the differentiation experiment, the participants were instructed as follows: "There will be times when the light will be (for an elementary school student: "yellow"; for a junior high student: "dark"). When this happens, do not grasp the ball. As in the previous case, only grasp it as quickly as possible when you see the light become (for an elementary school student: "red"; for a junior high student: "bright")". They then practised this twice for each task (go task: 2 times; no-go task: 2 times). Immediately afterwards, visual stimuli (length: 0.5-1.5 s) were presented 22 times (11 trials for each task, go and no-go) at random intervals (from 3 to 6 s). The data from the differentiation experiment were then analysed.
Previous studies have used similar experimental procedures related to go/no-go tasks with at least 100 trials. However, this study used fewer trials (5 and 22 for the formation and differentiation experiments, respectively) because the experiment involved children in a childcare/education context (in which time is limited) and this number of trials has been shown to be efficient and useful in Shikano and Noi's (8) study.
Questionnaire
Children's lifestyle habit data were collected using a self-administered questionnaire. We utilised a collective survey method. The survey was conducted in a classroom other than the one used for the go/no-go tasks, but both procedures were always conducted on the same day. Owing to first and second-graders' (elementary school) need for longer times to answer the questionnaire and the low reliability of their answers, we did not conduct the survey with them. Hence, we collected questionnaire data from 3,217 (1,664 males, 1,553 females) participants.
Our questionnaire was based on the lifestyle survey conducted by the Committee for Surveillance of Health in School Children and Adolescents (18) and Noi and Shikano (21). The items included: bedtime on the day before the survey; wake-up time on the day of the survey; daily mobile phone, smartphone, tablet, and PC usage time (i.e., screen time); and the number of exercise sessions per week (hereinafter PA). Sleep duration was estimated based on the recorded bedtime and wake-up times.
Data analysis
FIGURE 1. The go/no-go task in this study. The experiment was conducted in groups of up to 12 participants; (A) A stimulator was placed in front of each participant, and their responses were collected from grasping the rubber balls; (B) Go trials were a red light for elementary school children or a bright light for junior high school children; no-go trials were a yellow light for elementary school children or a dark light for junior high school children; (C) Both trials consisted of 11 phases, totalling 22 phases.
In this study, we examined the following three points: (1) Participants' lifestyle habits, namely, bedtime, wake-up time, sleeping hours, screen time, and PA. These were described by means, standard deviations (SD), and range (lowest value-highest value). Following the aggregation method of two previous studies (18, 19), we calculated and differentiated these variables by sex and grade. Further, we also calculated the number and percentages of participants who did not engage in PA (0 times) and the median number of times for those who did. (2) Differences in participants' number of errors in the differentiation experiment by sex and grade (commission errors regarding no-go tasks; omission errors with respect to go tasks). These were compared using non-repeating two-way analysis of variance. Additionally, we used the Bonferroni method to test the simple main effects when significant interactions were observed. (3) The relationship between lifestyle habits and commission and omission errors. After confirming the distribution of these errors, participants with less than the mean value of commission errors (3.3 times) were classified into a "low-value group", and those with values equal to or higher than the mean were classified into a "high-value group". A similar procedure was used for the omission errors: those with 0 errors were classified into a "low-value group", and those with 1 or more errors into a "high-value group". After confirming the distribution of each answer regarding lifestyle habits, we provided similar classifications for the participants in relation to the other variables. For bedtime and screen time, participants were classified using the mean and SD. Participants with a value below the mean -0.5 SD were classified into a "mean -0.5 SD group"; those with a value between the mean -0.5 SD and the mean +0.5 SD were classified into a "mean ± 0.5 SD group"; and those with a value of the mean +0.5 SD or more were classified into a "mean +0.5 SD or more group". Regarding PA, a substantial number of participants did not engage in any PA. Participants who engaged in PA 0 times were classified into a "no activity group"; those at or below the median (excluding those with 0 times) into a "median or below group"; and those above the median into an "above the median group". After these classifications, we conducted a multivariate binomial logistic regression analysis (forced input method), with commission and omission errors as dependent variables (low-value group = 0, high-value group = 1), and lifestyle habits (bedtime, screen time, PA), sex, and grade as independent variables. Wake-up time and sleep duration were excluded owing to the possibility of between-variable multicollinearity.
As data on lifestyle habits were not collected from elementary school 1st and 2nd graders, data from elementary 3rd graders to junior high 3rd graders were analysed for the first and third points of the above examinations, and data from elementary 1st graders to junior high 3rd graders were analysed for the second point. We used IBM SPSS ver. 26 software for statistical analysis, with the statistical significance level set at <5%.
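The statistical analysis itself was run in SPSS; as a rough Python illustration only, the sketch below reproduces the grouping rule (mean ± 0.5 SD bands) and a binomial logistic regression with statsmodels on synthetic data. The column names are hypothetical, and the study's three-level PA category is simplified to a weekly count.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in data with hypothetical column names.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "commission_errors": rng.integers(0, 12, n),
    "bedtime_min": rng.normal(150, 40, n),      # minutes after 20:00
    "screen_min": rng.normal(100, 50, n).clip(0),
    "pa_per_week": rng.integers(0, 7, n),
    "sex": rng.integers(0, 2, n),
    "grade": rng.integers(3, 10, n),
})

def three_groups(x):
    """0 = below mean-0.5SD, 1 = within mean±0.5SD, 2 = above mean+0.5SD."""
    z = (x - x.mean()) / x.std()
    return np.where(z < -0.5, 0, np.where(z < 0.5, 1, 2))

mean_err = df["commission_errors"].mean()
df["high_errors"] = (df["commission_errors"] >= mean_err).astype(int)
df["bedtime_grp"] = three_groups(df["bedtime_min"])
df["screen_grp"] = three_groups(df["screen_min"])

X = sm.add_constant(
    df[["sex", "grade", "bedtime_grp", "screen_grp", "pa_per_week"]])
fit = sm.Logit(df["high_errors"], X).fit(disp=0)
print(np.exp(fit.params))  # odds ratios for each independent variable
```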
Results
Participants' lifestyle habits
Table 1 illustrates the participants' bedtime, wake-up time, sleep duration, screen time, and PA data. Our results demonstrated that, for both male and female students, the general tendency was that as grades increased, sleep duration shortened because bedtime occurred later and wake-up time occurred earlier. For both male and female students, mean screen time ranged from 64.6 to 151.9 min per day, and it increased with grade. Furthermore, 620 participants (19.3%) did not engage in any PA per week; for those who did, the median number of times is presented in Table 1.
Difference in errors by sex and age
Figure 2 illustrates the cross-sectional transition in participants' errors (commission and omission errors) by sex and grade (see also Table 2), and Figure 3 illustrates the number of errors for elementary 3rd graders to junior high 3rd graders. Our results demonstrated that the most common frequency of commission errors was 2. The distribution gradually decreased as the number of errors increased (error range: 0 to 11), and the mean value ± SD was 3.3 ± 2.3 errors. In contrast, most participants did not exhibit omission errors, so this had a smaller distribution; nonetheless, there was a maximum of 10 errors.
Characteristics of participants' lifestyle habits
Olds et al. (22) analysed the sleep habits of Australian children and adolescents and reported that their weekday sleep durations were similar to those of Canadian, French, and Swiss children, although slightly longer than those of children in other European countries and significantly longer compared to American children. In comparison to these results, we found that Japanese children have extremely short sleep durations. However, our results were consistent with those found in a survey conducted on Japanese children by the Committee for Surveillance of Health in School Children and Adolescents (18), and which highlight a general trend among Japanese children. Additionally, compared to the survey results mentioned above, screen time for elementary school students in our sample was 2-23 min longer and 50-62 min shorter for junior high students. Thus, our results demonstrated that our sample reported sleeping habits characteristic of Japanese children, such as shorter sleep durations, but that their screen time tended to be slightly longer among elementary school students and shorter in junior high students.
Differences in go/no-go task performance by sex and age
Our results also indicated that, although a sex difference was observed in the commission errors, this was not the case regarding the omission errors. In contrast to our findings, Brocki and Bohlin (23) conducted multiple executive function tasks (including go/no-go tasks) on 6- to 13-year-olds and found no sex difference in terms of the disinhibition factor (which includes commission errors), but that it was detectable in terms of the speed/arousal factor (which includes omission errors). Generally, among children, commission errors in a continuous performance test reflect impulsivity, while omission errors reflect signs of carelessness (24). Based on the researchers' empirical experiences, we consider that Japanese girls tend to be more cautious than their male counterparts. Supporting this observation, Hagekull and Bohlin (25) examined the relationship between preschool temperament, environmental factors, and school-age personality in children aged 8-9 years and found that girls were more careful than boys. Nonetheless, we wished to go further in terms of the theoretical assumptions regarding these errors to better understand the difference in the results between our study and that of Brocki and Bohlin (23). Pavlov (26) explains higher brain functions (i.e., the manifestations of the function of the human cerebral neocortex) in relation to three characteristics: 1. degree of intensity, 2. equilibrium, and 3. lability in the two neural processes (i.e., excitation and inhibition processes). He further states that it is possible to classify higher brain functions into different types when considering these characteristics. Based on this theory, Luria (27) devised a conditioned-reflexes method of grasping motion that is preceded by language instruction, namely, go/no-go tasks. Based on this understanding, we considered that omission errors may have occurred not only because the excitation process was weak (i.e., the participant made an oversight owing to carelessness) but also because the inhibition process was stronger than the excitation process (i.e., when the grasp action was suppressed by inhibitory dominance). Therefore, omission errors can reflect carelessness, inhibitory dominance, or both. As described in section 2.3.1 of this study, in view of various circumstances, our go/no-go tasks had a smaller number of trials than those conducted in many previous studies. Thus, we speculate that our results reflecting participants' impulsiveness in commission errors were not due to the carelessness identified in the omission errors; rather, we believe that the inhibition process may have been more significant in this specific condition. Therefore, the differences between our study and that of Brocki and Bohlin (23), which used 100 trials, regarding results by sex may be explained by the difference in the number of trials. In any case, further investigation is recommended to assess the relationship between the number of trials and the strength of the excitation and inhibition processes.
[Table note: data were analysed using a two-way ANOVA without repetition; the table reflects data collected from elementary 3rd graders to junior high 3rd graders (n = 3,217).]
Our results also showed a significant difference by grade in both commission errors and omission errors. Corroborating this, van der Meere and Stemerdink (28) conducted go/no-go tests with male students aged 7-12 years and reported that those aged 7-8 years made more commission errors than those aged 9-10 years and 11-12 years. Additionally, Iida et al. (29) conducted go/no-go tasks with male and female students aged 6-12 years and reported that there was a significant negative correlation between age and commission errors under the 80% choice reaction condition, in which the go task was 80% and the no-go task was 20%. Brocki and Bohlin (23) also reported that the main effect of grade was significant, and that disinhibition (comprising commission errors) and speed/arousal factors (comprising omission errors) develop with age. Specifically, disinhibition was significantly lower in children aged 9.6-13 years than in those aged 6-9.5 years, and there was a significant tendency for speed/arousal to develop rapidly from children aged 6-7.5 years through to those aged 7.6-9.5 years. Therefore, our findings accord with all of these studies.
[Table caption: Relationship between lifestyle behaviours and commission errors (A) and omission errors (B). The results were analysed using data from the 3rd grade of elementary school to the 3rd grade of junior high school (n = 3,217). β = standardised partial regression coefficient; OR = odds ratio; 95% CI = 95% confidence interval.]
Relationship between go/no-go task performances and lifestyle habits
Our results showed that commission errors were significantly related to sex and grade, and that omission errors were significantly related to grade, bedtime, screen time, and PA. As noted, many studies have reported a relationship between sleep conditions, PA, and cognitive function using univariate analysis. This study is significant in that it analysed the relationship between multiple lifestyle habits and cognitive functions using multivariate analysis, which is likely to enhance understanding of these phenomena and their associations.
Moreover, as noted, the inhibition process may weigh more heavily on omission errors when the number of trials is small; that is, in this study, lifestyle habits may have affected the degree of inhibition more than they affected impulsiveness or carelessness.
Previous studies have indicated that Japanese children's general lifestyle habits are concerning, as not only do they have short sleeping durations and extensive screen times but also pervasive sedentary behaviour (30,31). These habits may further enhance the strength of the inhibition process beyond the characteristics of temperament. Correlatively, in Japan, the inhibitory type (in which the number of omission errors was higher than the standard) was found to be 0% in a 1969 survey but had increased by a few percentage points in a 1998 survey and was reported to be approximately 10% in a 2008 survey (32). The fact that this type of error was clearly observable in only a small number of children, and that omission errors are strongly related to lifestyle habits, which have been a cause for concern in recent years, highlights an issue that requires further investigation. The mechanism of these errors is not yet known, and clarification on this point is needed urgently.
Limitations and recommendations for future research
Although our study presents relevant additional insight to the literature, it also has limitations that need to be considered in future studies. First, as our study was cross-sectional, it could not identify causal relationships; thus, future longitudinal studies are warranted. Second, we utilised a survey to collect data on children's lifestyle habits. Future studies should collect objective data on lifestyle habits while conducting an investigation similar to ours to allow for between-study comparisons. Third, whether each omission error resulted from participants' carelessness/inattention or inhibitory dominance remains to be examined; thus, future studies are warranted to investigate the underlying mechanisms of omission errors using other indicators (e.g., simple and choice reaction time).
Conclusions
We found that children's cognitive functions are related to their lifestyle habits (i.e., sleep conditions, screen time, and PA) in addition to sex and grade. Our results may provide important guidance in reforming daycare and educational practises to address the changing higher brain function profiles of contemporary children.
Data availability statement
The original contributions presented in the study are included in the article/supplementary files; further inquiries can be directed to the corresponding author.
Ethics statement
The studies involving human participants were reviewed and approved by the Ethics Review Committee for experiments on humans, Nippon Sport Science University. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
"year": 2022,
"sha1": "70b3c343d1748046fe557e5f459a2d5528a805e9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "70b3c343d1748046fe557e5f459a2d5528a805e9",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Mucosal incision-assisted biopsy versus endoscopic ultrasound-assisted tissue acquisition for subepithelial lesions: a systematic review and meta-analysis
Background/Aims Mucosal incision-assisted biopsy (MIAB) for tissue acquisition (TA) from subepithelial lesions (SELs) is emerging as an alternative to endoscopic ultrasound (EUS)-guided TA. Only a limited number of studies compared the diagnostic utility of MIAB and EUS for upper gastrointestinal (GI) SELs; therefore, we conducted this systematic review and meta-analysis. Methods A comprehensive literature search from January 2000 to January 2022 was performed to compare the diagnostic accuracy and safety of MIAB and EUS-guided TA for upper GI SELs. Results Seven studies were included in this meta-analysis. The pooled technical success rate (risk ratio [RR], 0.96; 95% confidence interval [CI], 0.89–1.04) and procedural time (mean difference, –4.53 seconds; 95% CI, –22.38 to 13.31) were comparable between both groups. The overall chance of obtaining a positive diagnostic yield was lower with EUS than with MIAB for all lesions (RR, 0.83; 95% CI, 0.71–0.98) but comparable when using a fine-needle biopsy needle (RR, 0.93; 95% CI, 0.83–1.04). The positive diagnostic yield of MIAB was higher for lesions <20 mm (RR, 0.75; 95% CI, 0.63–0.89). Six studies reported no adverse events. Conclusions MIAB can be considered an effective alternative to EUS-guided TA for upper GI SELs without an increased risk of adverse events.
INTRODUCTION
Subepithelial lesions (SELs) of the gastrointestinal (GI) tract arise from the muscularis mucosa, submucosa, or muscularis propria. Although SELs are most commonly incidental findings on endoscopy, they can rarely present with bleeding, dysphagia, gastric outlet obstruction, and metastasis, depending on the size, nature, and location of the lesion in the GI tract. 1 The detection rate of SELs has increased recently owing to the increased use of screening endoscopies and advances in technology. 2 Although most SELs are benign, 15% can be malignant at presentation. 3 Hence, appropriate identification and characterization of these lesions are of utmost importance.
Although SELs are routinely identified on endoscopy, endoscopic ultrasonography (EUS) is the first-line modality for characterizing SELs, as it provides information regarding the layer of origin, intramural/extramural location, size and shape, echogenicity, vascularity, and associated lymphadenopathy. Initially, tissue acquisition (TA) for diagnosis was performed using jumbo biopsy forceps with the bite-on-bite technique rather than standard biopsy forceps. In a retrospective analysis, TA with jumbo biopsy forceps had a diagnostic yield of 60%, with a better yield than EUS fine-needle aspiration (FNA) in lesions arising from the submucosal layer (65.1% vs. 37.5%) but not the muscularis propria layer (40% vs. 57.1%), and with a higher risk of bleeding when a biopsy was performed on lesions arising from the fourth layer. 4 With increasing availability, EUS-guided TA using FNA or fine-needle biopsy (FNB) is currently the most commonly employed method. However, the diagnostic yield of EUS-FNA is affected by the availability of rapid on-site evaluation by a cytopathologist and by the size of the lesion, with lesions less than 2 cm having a poor diagnostic yield compared to larger lesions. 5,6 TA using EUS-FNB obviates the need for rapid on-site evaluation, requires fewer passes, and preserves tissue architecture. However, previous meta-analyses comparing TA using FNA and FNB needles have reported conflicting results. [7][8][9] Since the original description of the technique by Yokohata et al., 10 mucosal incision-assisted biopsy (MIAB), or single incision with a needle knife, has gained importance as an alternative method of TA. In this technique, a mucosal incision line is chosen, and saline with 0.001% epinephrine is injected submucosally. A mucosal incision is made using an electrosurgical knife. After submucosal dissection, a biopsy of the exposed SEL is performed using conventional biopsy forceps, followed by closure of the mucosal incision with endoclips. 11 In a meta-analysis, Dhaliwal et al. 12 showed a high pooled diagnostic yield of MIAB and a relatively short operating time. An MIAB variant, endoscopic submucosal dissection-assisted deep biopsy, has shown a pooled diagnostic rate of 95% with a very low rate of adverse events (AEs). 13 Given the high diagnostic yield of MIAB, it can serve as an alternative to EUS-guided TA for the diagnosis of upper GI SELs with minimal complications. Hence, the present systematic review and meta-analysis aimed to compare EUS-FNA/B and MIAB to identify the optimal method of TA.
Information sources and search strategy
The Medline, Embase, Cochrane Central Register of Controlled Trials (CENTRAL), and Science Direct databases were searched from January 2000 to January 2022 for all relevant studies. The following keywords were used for the search: (EUS OR "Endoscopic ultrasound") AND Subepithelial AND (MIAB OR Incision OR Biopsy OR "Needle knife"). Additionally, the reference lists of all identified trials, guidelines, and reviews on the topic were searched for relevant records. This meta-analysis was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. 14
Study selection
Two independent reviewers searched the titles and abstracts of the retrieved search records for the inclusion and exclusion criteria, followed by full-text screening of potentially eligible citations. A third reviewer resolved disagreements. Studies included in this meta-analysis were comparative studies fulfilling the following PICO criteria: (1) Patients=upper GI SELs; (2) Intervention=use of MIAB or its variants, like submucosal tunneling, for TA; (3) Comparison=EUS-guided TA using either FNA or FNB needle; and (4) Outcomes=procedural outcomes, diagnostic outcomes, and AEs. Only original articles were included in the analysis. There was no bar on language, as long as the study outcomes were mentioned in the text. Non-comparative studies, conference abstracts, case series, and studies involving persons aged <18 years were excluded from the analysis.
Data extraction
Data extraction was independently performed by two investigators. A third reviewer resolved disagreements. Data were collected under the following headings: study author and year, number of patients, age distribution, type of intervention used and comparator arm, follow-up duration, outcomes, and AEs.
Definition of outcomes
The primary outcome of the analysis was a positive diagnostic yield, defined as the percentage of lesions in which a pathologist could make a confirmed diagnosis. The secondary outcomes included the technical success, procedural time, and AEs. Technical success was defined as access to the target tissue and obtaining of visible tissue specimens or fragments. AEs included the development of pain, bleeding, and perforation, which were directly related to the procedure. The procedural time was considered according to the definition of individual studies.
Risk of bias in individual studies
The risk of bias was assessed by two reviewers using the Cochrane risk of bias (RoB 2) tool for randomized controlled trials (RCTs) 15 and the Cochrane Collaboration's risk of bias in non-randomized studies of interventions (ROBINS-I) tool for non-randomized studies. 16
Statistical analysis
Dichotomous variables were analyzed using the risk ratio (RR) and the Mantel-Haenszel test. A random-effects model was used irrespective of the presence of heterogeneity. The Q and I² statistics were used to assess heterogeneity among the studies. A p-value of the Q test <0.1 or an I² value >50% was considered significant. Publication bias was assessed by visual inspection of funnel plots. A subgroup analysis was performed based on the size and location of the SEL. A sensitivity analysis was conducted using a leave-one-out meta-analysis, which excluded one study from each analysis to investigate each study's influence on the overall effect-size estimate and to identify influential studies. All statistical analyses were performed using the RevMan software (ver. 5.4.1; Cochrane Collaboration) and STATA software (ver. 17; StataCorp., College Station, TX, USA).
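The pooling itself was performed in RevMan and STATA; purely as an illustration of random-effects pooling of log risk ratios, the sketch below implements inverse-variance DerSimonian-Laird weighting (note that the analysis above used Mantel-Haenszel weighting) on hypothetical per-study counts.

```python
import numpy as np
from scipy.stats import norm

def pooled_rr_dl(events_a, total_a, events_b, total_b):
    """Random-effects (DerSimonian-Laird) pooled risk ratio.
    Group a = EUS-guided TA, group b = MIAB; counts are illustrative."""
    a, na = np.asarray(events_a, float), np.asarray(total_a, float)
    b, nb = np.asarray(events_b, float), np.asarray(total_b, float)
    y = np.log((a / na) / (b / nb))                     # per-study log RR
    v = 1/a - 1/na + 1/b - 1/nb                         # variance of log RR
    w = 1 / v                                           # fixed-effect weights
    q = np.sum(w * (y - np.sum(w * y) / np.sum(w))**2)  # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)             # between-study variance
    i2 = max(0.0, (q - (len(y) - 1)) / q) * 100 if q > 0 else 0.0
    w_star = 1 / (v + tau2)                             # random-effects weights
    mu = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    z = norm.ppf(0.975)
    return np.exp(mu), np.exp(mu - z * se), np.exp(mu + z * se), i2

# Hypothetical counts: diagnostic successes / lesions sampled per study.
rr, lo, hi, i2 = pooled_rr_dl([20, 30], [25, 40], [23, 36], [25, 40])
print(f"RR {rr:.2f} (95% CI, {lo:.2f}-{hi:.2f}); I2 = {i2:.0f}%")
```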
Study characteristics and quality
In total, 825 records were identified in the search, and 536 were screened after removing duplicates. Figure 1 shows the PRISMA flowchart of the article selection process. Seven studies [17][18][19][20][21][22][23] were included in the meta-analysis. Table 1 summarizes the characteristics of the included studies. The majority of the studies were from Asia, 17-21 one study was from Europe, 22 and another multicenter study involved centers in North America and Europe. 23 Three studies were prospective, 18,20,21 three were RCTs, 19,22,23 and one was retrospective. 17 The SEL was located in the stomach in most of the studies, [17][18][19]21 while three studies 20,22,23 included lesions in the esophagus and duodenum along with the stomach. The pooled mean age of the population was 61.1±12.1 years. With respect to EUS-guided TA, four studies used FNA needles, [17][18][19]22 two used FNB needles, 20,23 and one used both. 21 Among the RCTs, only one had a low risk of bias, 23 whereas the other two had a moderate risk of bias (Fig. 2A). 19,22 Among the non-randomized studies, one study had a low risk of bias, 18 two had a moderate risk of bias, 20,21 and one had a high risk of bias (Fig. 2B). 17
In the study by Sanaei et al., 23 one patient had pain and two developed delayed bleeding within six days after the procedure. One of the bleeding episodes was self-limiting, whereas the other required arterial embolization.
5) Assessment of publication bias and leave-one-out analysis
Visual assessment of the funnel plot (Supplementary Fig. 1) and Egger's test (Supplementary Table 1) showed the presence of publication bias for both technical success and diagnostic accuracy, but not for procedural time. A leave-one-out meta-analysis (Supplementary Fig. 2) showed a significant change in procedural time (Supplementary Fig. 2C). With the exclusion of the study by Jung et al., 17 EUS-guided TA was associated with a significantly shorter procedure duration than MIAB (MD, -12.34 seconds; 95% CI, -21.05 to -3.64; I²=91%). Table 2 summarizes the findings with the confidence in the evidence.
DISCUSSION
The present meta-analysis attests to the role of MIAB as an alternative to EUS-guided TA in upper GI SELs. The analysis showed a similar technical success rate with EUS-guided TA and MIAB (RR, 0.96; 95% CI, 0.89-1.04), but a lower rate of diagnostic yield with EUS-guided TA (RR, 0.83; 95% CI, 0.71-0.98). On subgroup analysis, EUS-FNB was comparable to MIAB with respect to the diagnostic yield (RR, 0.93; 95% CI, 0.83-1.04). However, the diagnostic yield of EUS-guided TA was lower than that of MIAB for diagnosing lesions less than 20 mm (RR, 0.75; 95% CI, 0.63-0.89). Both techniques were associated with significantly low AE rates. Although the procedural time was similar in both methods on overall analysis (MD, -5.81 seconds; 95% CI, -14.53 to 2.91), the study by Jung et al. 17 was a significant outlier, and with its exclusion, the mean procedural time was lower with EUS (MD, -12.34 seconds; 95% CI, -21.05 to -3.64). The current guidelines by the European Society of Gastrointestinal Endoscopy (ESGE) recommend tissue diagnosis for all SELs with features suggestive of gastrointestinal stromal tumor (GIST), size >20 mm, associated high-risk stigmata on EUS, or prior to surgical resection. 24 The European Society for Medical Oncology 25 and the Japanese GIST Guideline Subcommittee 26 recommend resection of GIST, even those <20 mm. Hence, tissue sampling for pathological and immunohistochemical analyses is a critical step in managing SELs. The ESGE recommends using either MIAB or EUS-guided TA for sampling SELs >20 mm in size. For SELs <20 mm in size, the ESGE recommends MIAB as the first choice, followed by EUS-guided TA. 24 Dhaliwal et al., 12 in a meta-analysis on the outcome of MIAB for upper GI SELs, reported an overall pooled diagnostic yield of 89% without any heterogeneity. In the current meta-analysis, MIAB was associated with a higher chance of diagnosis than EUS-guided TA. Hence, the utility of MIAB in the diagnosis of SELs needs to be explored in larger studies.
Meta-analyses comparing TA of SELs using FNA and FNB needles have reported conflicting results. Two meta-analyses reported better diagnostic accuracy with FNB, 7,8 while another reported no difference in accuracy based on the choice of needle employed. 9 The pooled rates of diagnostic yield in the present analysis with MIAB, EUS-FNB, and EUS-FNA were 90.7% (95% CI, 84.7-96.7), 77.7% (95% CI, 59.8-95.7), and 73.8% (95% CI, 58.4-89.3), respectively. In the subgroup analysis comparing the diagnostic rate of EUS-FNB with MIAB in the present study, both methods were comparable in achieving a diagnosis without any heterogeneity (RR, 0.93; 95% CI, 0.83-1.04). Considering the overall quality of the tissue obtained, both EUS-FNB and MIAB may be superior to EUS-FNA for the histological diagnosis of SELs.

Fig. 6. Forest plot comparing the procedural time between mucosal incision-assisted biopsy and EUS-guided tissue acquisition. EUS, endoscopic ultrasound; MIAB, mucosal incision-assisted biopsy; SD, standard deviation; IV, inverse variance; CI, confidence interval.

In a study analyzing the factors influencing the diagnostic yield of EUS-FNA for SELs, the diagnostic accuracy was 50% for lesions <20 mm and 91.6% for those >20 mm. 6 Akahoshi et al. 5 reported a diagnostic rate of 71% for lesions <20 mm, 86% for those between 20 mm and 40 mm, and 100% for those >40 mm. This is mainly because the smaller size of the lesion makes it difficult to target using EUS-guided TA techniques. In the present analysis, the diagnostic yield of EUS-guided TA was lower than that of MIAB for small SELs <20 mm (RR, 0.75; 95% CI, 0.63-0.89). The low diagnostic yield of EUS-guided TA in the study by Kobara et al. 18 may be attributed to the fact that approximately 74% of the SELs were smaller than 2 cm. Hence, for SELs <20 mm, MIAB may be considered the preferred option over EUS-guided TA. However, this rule has a caveat. Kim 27 proposed a classification method to determine whether GISTs are predominantly intramural or extramural. Those with predominant extramural components (types III and IV) may not be adequately sampled using MIAB, and EUS-guided TA may be the preferred modality in these situations.
The present analysis showed no significant difference in the procedural time (MD, -4.53 seconds; 95% CI, -22.38 to 13.31), although there was significant heterogeneity. Therefore, these findings should be interpreted with caution. In the leave-one-out meta-analysis, after excluding the influential study by Jung et al., 17 the procedural time was significantly shorter with EUS-guided TA. Thus, MIAB may be associated with a longer procedure duration than EUS-guided TA.
Dhaliwal et al. 12 reported a pooled clinically significant postprocedural bleeding rate of 5.03% (95% CI, 0.4-12.9; I2=57.43%) with MIAB, but no perforation. In the present analysis, six 17-22 of the seven included studies reported no AEs with either of the procedures. Only the study by Sanaei et al. 23 reported AEs, such as abdominal pain and bleeding, associated with both techniques without any significant difference (p=1.0). This indicates the comparable safety of the two procedures. In terms of other available techniques for tissue diagnosis of SELs, Facciorusso et al. 28 compared EUS-FNB with bite-on-bite biopsy using jumbo forceps. The sample adequacy and diagnostic accuracy were significantly higher with EUS-FNB (94.1% vs. 77.5% and 89.3% vs. 67.1%, respectively), with a lower bleeding rate (6.6% vs. 29.1%). However, no recent comparative studies have evaluated the outcomes of MIAB against jumbo-forceps biopsy.
The findings of this study are important for several reasons.
First, the overall diagnostic yield of MIAB is comparable to that of EUS-guided TA and is better for lesions <20 mm in size. MIAB can be performed during routine endoscopy, and no advanced equipment is required. Second, lesion size does not affect the diagnostic yield of MIAB, whereas needle passage and aspiration of small SELs may be challenging with EUS. Third, MIAB can be easily performed regardless of the anatomic location of the lesion in the stomach, provided a reasonable bulge is visualized endoscopically (types I and II according to Kim's classification). In contrast, EUS-guided TA, especially FNA, can have a higher failure rate when the SEL is in the cardia or fundus because the stiff device has difficulty accessing these areas. 17 To address the suboptimal diagnostic yield of EUS-FNA, a study was conducted using a forward-viewing echoendoscope, which achieved a complete histological assessment in 93.4% of patients. 29 However, a subsequent RCT 30 reported comparable rates of histologic diagnosis between forward- and oblique-viewing echoendoscopes (80.5% vs. 73.2%; p=0.453).
There were a few limitations in the study, most inherent to any meta-analysis, which warrant further discussion. First, most of the studies were from a single center and two were retrospective. Second, the RCTs included in the analysis were underpowered to demonstrate a reasonable difference. Third, there was moderate to considerable heterogeneity in the studies with respect to the type of needle used and size of the mass lesions. Moreover, the definitions for procedural time varied among the studies, and one study did not define procedural time. Finally, economic considerations regarding the impact of sampling by MIAB or EUS-guided TA were beyond the scope of the current meta-analysis.
In conclusion, MIAB and EUS-guided TA have comparable technical success, diagnostic accuracy, and procedural time, but with significant heterogeneity. However, MIAB was better than EUS-guided TA for small lesions, without heterogeneity. MIAB is an alternative to EUS-guided TA in clinical practice and may be the procedure of choice for SELs <20 mm in size. In centers where EUS expertise is unavailable, MIAB is an easy and safe alternative to EUS-guided TA. Large multicenter trials are needed to validate the findings of this meta-analysis.

Efficacy of Middle Meningeal Artery Embolization in Treatment-Resistant Subdural Hematoma Caused by Spontaneous Intracranial Hypotension: Report of Two Cases and Review of the Literature
Spontaneous intracranial hypotension (SIH) most commonly manifests as bilateral subdural hematoma (SH). Most SIH cases resolve spontaneously, but further treatment with a blind epidural blood patch (EBP) may be needed. Cerebrospinal fluid (CSF) leakage in EBP-refractory cases can be treated surgically only if the site of the leak can be localized, which is not possible in most cases. Moreover, surgical evacuation of SH secondary to SIH (SH-SIH) is not advisable without first blocking the CSF leak. The management of these patients is therefore challenging, and alternative treatment options are needed. Although middle meningeal artery embolization (MMAE) is an effective treatment option in non-SIH SH, there are no reports of its application in the treatment of SH-SIH. We present two cases of SH-SIH in which the clinical and radiological findings completely resolved after bilateral MMAE.
INTRODUCTION
Spontaneous intracranial hypotension (SIH) is caused by leakage of cerebrospinal fluid (CSF) and is a rare condition, with an incidence of five per 100,000 people 8). Radiologically, SIH most commonly presents as bilateral subdural hematoma (SH) 10). The majority of SIH cases resolve spontaneously, and the healing process is further supported by conservative treatments such as intravenous fluid, bed rest, oral salt, caffeine, theophylline, methylxanthine, and steroids. Epidural blood patch (EBP) can be applied when patients do not respond to conservative and pharmacological treatments, with a success rate of 70% to 90% 5,18). In cases that fail despite multiple EBPs, surgical treatment is performed only if the localization of the CSF leak can be detected 12). In addition, SH secondary to SIH (SH-SIH) cannot be sufficiently evacuated with burr-hole drainage before the CSF leak is treated 17). In this respect, the management of this patient group is challenging, and alternative treatment modalities have come into question. Middle meningeal artery embolization (MMAE) is an effective treatment option in non-SIH subacute/chronic SH without surgical indication 13); however, there are no publications on the treatment of SH-SIH with MMAE. Considering that the surgical approach does not ensure optimal treatment in EBP-refractory SH-SIH, in this study we aimed to evaluate the effectiveness and applicability of MMAE in treatment-resistant SH-SIH cases for the first time in the literature.
Case 1
A 40-year-old male presented with orthostatic headache, and bilateral subdural hematoma was detected on cranial computed tomography (CT), with volumes of 24.23 mL in total, 12.25 mL on the left side, and 11.98 mL on the right side. Subdural hematoma volumes were evaluated with the OsiriX ® DICOM Viewer program (Pixmeo SARL, Bernex, Switzerland) and calculated with the manual planimetric method. While dural enhancement was detected on cranial magnetic resonance imaging (MRI), no pathology causing CSF leakage was found on whole-spine MRI. The patient was diagnosed with SIH according to the International Classification of Headache Disorders criteria 11) after assessments including anamnesis, examination, and MRI scans (Fig. 1), and received conservative treatment.

Fig. 1. Preoperative magnetic resonance imaging (MRI) of epidural blood patch (EBP)-refractory case 1. A: Axial T2-weighted MRI shows bilateral widening of subdural spaces and subdural hematoma (SH) (arrows). B: Axial T2-weighted MRI at the level of the uncus shows obliteration of bilateral sylvian and basal cisterns (red arrows) and bilateral uncal herniation, especially on the left side (blue arrow). C: Coronal fluid-attenuated inversion recovery MRI shows bilateral SH (red arrows), shrinkage of lateral ventricles (blue arrows), obliteration of the third ventricle, and bilateral uncal herniation, especially on the left side (yellow arrow). D: Sagittal T1-weighted MRI shows shrinkage of the corpus callosum (red arrows), reduction of the pontomesencephalic angle, mesencephalic compression due to transtentorial herniation, and tonsillar herniation (blue arrow). E: Sagittal enhanced T1-weighted MRI shows epidural venous congestion extending to the C4 level (red arrows). F: Axial enhanced T1-weighted MRI shows increased pachymeningeal enhancement (red arrows) and bilateral SH (blue arrows).
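The manual planimetric method used above amounts to tracing the hematoma area on each CT slice and summing area times slice thickness. A minimal sketch of that arithmetic follows; the slice areas and thickness are hypothetical illustration values, not the measurements from this case.

```python
# Manual planimetric volume estimate: sum of (traced area x slice thickness).
slice_areas_mm2 = [120.0, 180.0, 210.0, 160.0, 95.0]  # hypothetical traced areas per slice
slice_thickness_mm = 5.0

volume_mm3 = sum(area * slice_thickness_mm for area in slice_areas_mm2)
volume_ml = volume_mm3 / 1000.0  # 1 mL = 1000 mm^3
print(f"Estimated hematoma volume: {volume_ml:.2f} mL")
```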
After 7 days of follow-up with conservative treatment, the patient did not show any clinical improvement. Blind EBP was then performed with 5-10 mL of 1% lidocaine in a monoplane digital subtraction angiography (DSA) suite (Siemens Artis Zee, Erlangen, Germany) in the left lateral decubitus position, using an autologous blood-contrast medium mixture (5 mL of iopamidol). A 30-mL venous blood sample was obtained from the antecubital vein and administered into the lumbar epidural region through a 20-gauge spinal needle. After EBP, the patient was observed for 7 days, and his medical treatments were continued with rest in the Trendelenburg position. A second session of EBP was performed because the patient did not improve within 7 days after the first session. Conservative treatment was continued after the second EBP session, as after the first. Control CT scans were performed on the 6th day of the conservative, first EBP, and second EBP treatment steps, and no regression in hematoma volumes was noted.
Bilateral MMAE was performed in the patient, who had not exhibited any clinical improvement despite 21 days of conservative treatment and two sessions of EBP at 1-week intervals. MMAE was performed under conscious sedation in a monoplane DSA suite. Access was achieved from the right femoral artery with a 5 F introducer; following administration of 5000 IU of IV heparin, both external carotid arteries were catheterized with 5 F diagnostic catheters, and middle meningeal artery (MMA) catheterization was performed coaxially with a 2.7 F Progreat ® microcatheter (Terumo, Tokyo, Japan). One vial of 100-300 µm Bead Block ® particles (Bead Block; Biocompatibles, Farnham, England) was used until stasis occurred (Fig. 2). Conservative treatment was continued after MMAE, as after the EBP sessions. The patient was followed up for 1 week after MMAE. At the end of the first week, SH volumes were minimally decreased on CT, and the patient was partially relieved compared with presentation. The patient was discharged 1 week after MMAE. Control cranial CT and MRI scans were performed at the end of the 1st and 3rd months following MMAE. Complete resolution of the SH was observed on imaging at the end of the 3rd month, and the patient had no complaints (Fig. 3).
Case 2
A 39-year-old male patient presented with dizziness, and cranial CT demonstrated bilateral subdural hematoma with volumes of 25.83 mL in total, 12.65 mL on the left side, and 13.18 mL on the right side (Fig. 4). Dural enhancement was seen on contrast-enhanced cranial MRI, but no pathology causing CSF leakage was detected on whole-spine MRI or MR myelography. The patient was diagnosed with SIH, and conservative treatments were applied for 7 days.
After 7 days, blind EBP was performed, since conservative treatment had provided neither relief of complaints nor regression of hematoma volume. A second session of blind EBP was performed on the 14th day because the patient showed no clinical or radiological improvement. During these periods, conservative treatment was continued without interruption, and control CT scans were performed on the 6th day of the conservative, first, and second EBP sessions. No regression of hematoma volumes or of the patient's complaints was observed at the end of 21 days despite conservative treatment and two sessions of EBP. Bilateral MMAE was performed on the 21st day due to the persistent clinical and radiological findings. After the procedure, conservative treatment was maintained. One week after MMAE, a minimal decrease in hematoma volume was observed on CT, and the complaints were partially reduced. The patient was discharged in the first week after the procedure and was followed up with cranial CT and MRI scans at the end of the 1st and 3rd months after MMAE. At the 3-month control, the patient had complete resolution of the hematoma on radiological imaging and no complaints (Fig. 5).
DISCUSSION
The most common natural course of SIH is spontaneous resolution, which occurs within several weeks. Conservative treatment options may support this spontaneous resolution process. Autologous EBP may be performed in patients who do not improve with conservative treatment and has a 70-90% success rate 5,18). Although the superiority of myelography-guided targeted EBP over blind EBP has been reported 1), Cho et al. 3) indicated that these two techniques have similar results.
Regarding the current literature on EBP treatment in SIH patients, Wu et al. 18) reported 98% complete relief after the first two EBP sessions, performed at 2-day intervals, and Chung et al. 5) reported that the majority of cases were relieved within 1 month after EBP. In the present report, we performed MMAE after almost 1 month of follow-up and noted promising results in these SIH patients with bilateral SH, who had severe symptoms and radiological herniation findings despite two sessions of EBP and conservative treatment. Considering that the management of this patient group is challenging and controversial, we conclude that MMAE can be a beneficial option for SH-SIH patients whose symptoms are not relieved by EBP treatment.
Drainage of the collection and irrigation of the subdural space via burr hole, twist-drill craniostomy, or craniotomy are frequently used surgical techniques in non-SIH subacute/chronic SH 19). SH-SIH often progresses into chronic SH 19); however, evacuating the hematoma before treating the CSF leak may cause hematoma recurrence and even clinical worsening. García-Morales et al. 9) reported recurrence 2 months after hematoma drainage in a patient with SH-SIH. In a series of 40 patients in whom the hematoma had been evacuated prior to treatment of the CSF leak, Schievink et al. 16) reported that the hematoma did not regress without treatment of the leak. The authors further advocated that hematoma drainage is unnecessary and that the CSF leak can be treated safely before evacuating the hematoma 16). Dhillon et al. 6) reported an SIH case that worsened clinically after the hematoma was evacuated initially; therefore, they also recommended repair of the CSF leak as the first-step treatment. Other studies in the literature also reported that it is unnecessary to evacuate the hematoma first, as this might cause clinical deterioration 4,7). Although repair of the CSF leak before surgical SH drainage is an appropriate treatment option for SH-SIH patients, there is no well-accepted intervention for SH-SIH cases with no detectable CSF leak origin, like our patients. In this respect, MMAE may be an option to close the gap in this area.
In non-SIH subacute/chronic SH without surgical indication, MMAE is an effective treatment option. The effectiveness of embolization in both newly diagnosed and recurrent hematoma has often been reported in the literature 13-15). The underlying mechanism of hematoma regression after embolization is disruption of the blood supply to the outer membrane of the hematoma, which originates from the dura. This membrane appears to drive the evolution of the hematoma, as demonstrated radiologically and histologically in several studies 2,13).
In this study, the SH-SIH did not regress despite conservative and medical treatment and two further sessions of EBP. Therefore, MMAE, a minimally invasive treatment method, was performed for the first time in the literature for SH-SIH. We observed that the hematoma was completely resorbed, as documented radiologically after 3 months. With this study, we show that MMAE may be an alternative treatment option in EBP-refractory SH-SIH patients who are not suitable for surgery. Considering the small number of cases in our study, our findings should be supported by studies including more cases, but we believe they will inspire future studies.
CONCLUSION
The diagnosis and treatment of SIH may be challenging. Conservative methods and EBP are the first choice for treatment, but EBP has a success rate of 70-90%. EBP-refractory cases are treated surgically only if the localization of the CSF leak can be detected. Thus, there is a gap in the treatment of the patient group that is resistant to EBP and not appropriate for surgery. Our results showed that MMAE may be an effective treatment method for EBP-refractory SH cases of SIH. Our study may guide further studies with larger patient groups investigating this challenging issue.
Conflicts of interest
No potential conflict of interest relevant to this article was reported.
Informed consent
Informed consent was obtained from all individual participants included in this study.

The value of pre-operative MRI in management of penile fractures
Penile fracture is a urological emergency that requires urgent assessment and surgical intervention to avoid long-term complications. In this report, we describe a case in which penile MRI was used for initial assessment and surgical planning. This allowed exact localisation of the tunical tear and a direct incision over it for repair. In this case, the man avoided circumcision, which would often be required with the conventional degloving approach.
Introduction
Penile fracture is defined as disruption of the tunica albuginea with a corporeal tear induced by blunt trauma to the erect penis. In the western world, the most common cause is traumatic coitus. 1 Penile fracture is generally a clinical diagnosis that does not require imaging; however, several imaging modalities are available to confirm the diagnosis. These include fluoroscopy-guided cavernosography, ultrasonography, and magnetic resonance imaging (MRI). 2 The high sensitivity of MRI in penile fractures has been described previously; 3 however, its primary benefit is the ability to localise the site of the tunical tear.
Penile fractures are a urological emergency that requires prompt repair of the tunica albuginea to avoid long-term complications such as penile deformity, erectile dysfunction, and urinary dysfunction. 1 Various surgical approaches can be used to evacuate the haematoma, identify the tunical injury, and repair the defect. The most commonly used, and most invasive, approach is the circumferential degloving incision. Other approaches include an incision directly over the haematoma, and penoscrotal and perineal incisions.
We describe a case of penile fracture in an uncircumcised man in whom a direct incision over the fracture site was guided by the pre-operative MRI. He avoided circumcision, which otherwise could have been inevitable with the degloving approach.
Case presentation
A 42-year-old man presented with penile pain of one hour's duration after hearing a popping sound and experiencing sudden detumescence during reverse-cowgirl sexual intercourse with his wife. However, he reported achieving multiple erections after the initial injury. On examination, there was significant penile swelling and bruising (Fig. 1), but no area of tenderness or palpable haematoma; we were unable to determine whether a fracture was present proximally or distally, or on which side. There was no haematuria. Normally, the clinical examination alone would have warranted penile exploration, possibly through a degloving incision. Given the unclear history of recurrent erections, an urgent MRI was performed within 30 minutes of presentation. The patient was in the supine position with his penis taped to the abdomen. Triplanar T2 sequences with and without fat saturation, axial and coronal T1 sequences, and sagittal STIR sequences targeted to the previous sequences were obtained. The MRI identified a disruption of the tunica albuginea of the mid-shaft of the right corpus cavernosum measuring 11 mm wide, with an associated 2 cm haematoma (Fig. 2).
The patient proceeded to surgical exploration, where a direct 1 cm incision was made at the fracture site identified using cognitive guidance with reference to the MRI images. The incision was extended further because of the large haematoma (Fig. 3). A 1 cm defect in the tunica was identified and repaired with 3/0 Ticron mattress sutures. At the 3-month postoperative review, the patient was achieving erections sufficient for intercourse without medications.
Discussion
MRI is well known to have high sensitivity and negative predictive value in identifying tunical ruptures in penile fractures, with a recent study reporting 100% sensitivity and 87.5% specificity in a sample of 31 patients. 3 By reporting this case, we wanted to highlight some of the less-recognised utilities of performing pre-operative MRI in patients with suspected penile fracture.
Firstly, classical surgical degloving of the penis with a circumferential incision can cause significant morbidity through diffuse swelling of the dartos fascia. A pre-operative MRI can aid surgical planning by localising the defect in the tunica albuginea, allowing a direct incision. Furthermore, there is the potential to improve cosmetic outcomes by avoiding circumcision in those who are uncircumcised; some men may also refuse circumcision. In future, there may be a role for placing skin fiducial markers prior to the MRI to further help localise the fracture. 4 Secondly, in cases where the clinical history and physical examination are not adequate for making a clinical decision, an urgent MRI can play a pivotal role in diagnosis. While surgical exploration remains a viable option for diagnosis and management, pre-operative MRI can avoid unnecessary exploration and morbidity. Conversely, an expedited MRI that shows a fracture in these cases can lead to urgent surgical repair to avoid complications and optimise outcomes.
There are, however, some pitfalls that should be considered. MRI has a reported lower accuracy, with a sensitivity of 60%, in identifying concomitant urethral injuries in patients with penile fracture. 5 In suspected penile fracture patients with haematuria, further investigation with cystoscopy should be considered. In an emergent setting, not only can physical access to an MRI facility be limited, but there may also be a lack of radiological expertise in reporting penile MRIs, as it is a rare presentation. Additionally, confidence in interpreting penile fractures on MRI may be low among inexperienced urologists. 5 And, of course, the cost of performing an urgent MRI is always a consideration.
Conclusion
In health facilities that have easy access to MRI scanners, and in men with suspected penile fracture, an urgent pre-operative MRI could be considered as part of their assessment and management. This could potentially reduce morbidity, particularly in uncircumcised men.
Consent
The patient has given informed consent to the usage of clinical history, anonymised photographs and medical imaging for this report.
Declaration of competing interest
None.

Biomolecules, Fatty Acids, Meat Quality, and Growth Performance of Slow-Growing Chickens in an Organic Raising System
Simple Summary

The increasing demand for nutritionally rich, quality products by health-conscious consumers has raised the need to explore alternative farming systems such as organic farming. In this study, we report the efficiency of Korat chickens grown under organic farm conditions. The study demonstrates not only that the slow-growing Korat chicken is suitable for organic farming, but also that the organic raising system improves its growth performance and meat quality. Furthermore, the study unveils a set of biochemical traits, identified using synchrotron radiation-based Fourier transform infrared spectroscopy, that differ significantly between the meat of chickens raised under conventional and organic raising systems, suggesting their potential use as markers to monitor meat quality. The findings of this study provide evidence for the potential of organic raising systems for commercial adoption in tropical areas such as Southeast Asia.

Abstract

This study aimed to determine the effect of the organic raising system (OR) on growth performance, meat quality, and physicochemical properties of slow-growing chickens. Three hundred and sixty (one-day-old) Korat chickens (KRC) were randomly assigned to control (CO) and OR groups. The groups comprised six replicates of thirty chickens each. The chickens were housed in indoor pens (5 birds/m2), wherein those in OR had free access to Ruzi pasture (1 bird/4 m2) from d 21 to d 84 of age. In the CO group, chickens were fed a mixed feed derived from commercial feedstuffs, while those in the OR group were fed a mixed feed derived from organic feedstuffs. The results revealed a lower feed intake (p < 0.0001) and feed conversion ratio (p = 0.004) in the OR. The OR increased total collagen, protein, shear force, and color of skin and meat, and decreased abdominal fat (p < 0.05). The OR improved the fatty acid profile, with increased DHA and n-3 PUFA and a decreased ratio of n-6 to n-3 PUFA in KRC meat (p < 0.05). The synchrotron radiation-based Fourier transform infrared spectroscopy and correlation loading analyses confirmed these results. In conclusion, our results showed that OR could improve growth performance and meat quality and suggested that the raising system could be adopted commercially. In addition, the observed differences in biochemical molecules could also serve as markers for monitoring meat quality.
Introduction
Raising systems have become an increasingly serious issue, particularly in terms of animal welfare [1]. The strategies used to increase the production rate to meet the growing demand for chicken meat have led to unintentional negative effects, such as muscle abnormalities and increased susceptibility to stress-induced myopathy in chickens [2]. However, the increasing health consciousness among consumers has increased the interest in environmentally friendly animal products produced on natural or organic farms. The organic raising system (OR) is a poultry management system in which the birds are fed only organic feed (produced without chemical fertilizers and pesticides) and are allowed to grow and express their natural behaviors without the use of chemicals such as antibiotics and other drugs [3]. All ORs are free-range systems, as they allow free outdoor access; however, the reverse is not true, as free-range systems that are not OR use general feed, medications, and chemicals [4]. The OR has gained increasing interest worldwide because it is environmentally friendly, its products are free from chemical residues, and it follows a high standard of welfare for birds [3]. Chickens raised in OR have a high nutritional value in terms of protein, total collagen, and total omega-3 polyunsaturated fatty acid (n-3 PUFA) contents in meat [1,5-7]. However, the breeds suitable for OR need to be tolerant, resilient, adaptable, capable of utilizing quality balanced feed [8-10], and associated with a low cost of production. Therefore, not all breeds are adaptable, but slow-growing chickens seem to be suitable for OR. The Korat chicken (KRC) is one such slow-growing chicken, with an average daily gain (ADG) of 19.8 to 21.0 g/d, and it takes about 70 d to reach the marketable body weight (BW) of ~1.2 kg [11]. It is a hybrid chicken obtained by crossing the Suranaree University of Technology (SUT) line as the dam line and Thai native chickens (Leung Hang Khao, LHK) as the sire line. KRC is recognized for its meat quality and holds promise as an efficient occupation for Thai smallholder farmers, as well as for smallholder farmers across Southeast Asia, in the near future. However, it has not received the expected level of scientific attention. Furthermore, though Southeast Asia is one of the largest chicken-producing regions, the efficiency of OR farming in Southeast Asia has not been fully explored. Therefore, to unveil the efficiency of OR for tropical areas, particularly Southeast Asia, it is essential to investigate the effects of OR on the growth performance and meat quality of slow-growing chickens compared with the conventional raising system.
Several studies have investigated the effect of OR on the growth performance and meat quality [12][13][14][15] of chickens; however, they have reported inconsistent results attributable to the differences in the breed, feed, and experimental sites used by them. Moreover, fatty acids (FA), particularly n-3 PUFA such as docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA), are expected to be higher in the meat of chickens that were raised in OR, because organic chickens can get α-linolenic acid (ALA) from pasture, which is the precursor of n-3 PUFA [14]. Nevertheless, some studies have reported contradictory results for the effects of OR on the fatty acid profiles of meat [16].
The raising system affects lipid oxidation [17], resulting in changes in secondary protein structure (such as α-helix and β-sheet content) [18], and may alter the biochemical composition of meat, including its glycogen content, which has been shown to play a major role in determining meat quality [19-21]. Therefore, it is expected that monitoring the changes in the biochemical composition of meat with high sensitivity could help understand and explain the effect of the raising system on meat quality. Fourier transform infrared microscopy is a powerful technique used for biological analysis and for monitoring changes in biochemical composition at the molecular level [22]. However, the conventional (globar) light source used for this technique does not have enough power to penetrate the cell and has a lower ability to detect changes compared with synchrotron radiation light sources [23].
Synchrotron Radiation-Fourier Transform Infrared (SR-FTIR) spectroscopy is a highly sensitive and powerful technique because it is extremely intense (hundreds of thousands of times more intense than that from conventional X-ray tubes) and highly collimated [24]. In addition, this technique is fast, inexpensive, and non-destructive compared to conventional methods. It can provide unique information and a high performance to detect biochemical compounds at the molecular level, such as proteins, lipids, and glycogen [25]. Moreover, the efficiency of FTIR spectroscopy to investigate the change in biochemical composition in KRC meat is evident from previous studies. For instance, Poompramun et al. [26] successfully evaluated the differences in the biochemical composition of KRC thigh meat from the high and low feed conversion ratio (FCR) groups. We hypothesized that SR-FTIR could help monitor the quality traits in meat obtained from different raising systems.
The present study investigates the effects of OR on the growth performance, physicochemical properties, and biochemical composition of meat using SR-FTIR in KRC. KRC was used mainly to identify its potential as a representative of slow-growing chickens in OR.
Ethics Statement
In the present study, all procedures were approved by the Ethics Committee on Animal Use of SUT, Nakhon Ratchasima, Thailand (user application ID: U1-02633-2559).
Birds, Experimental Design, and Diets
This study was conducted from January to April 2018. The experimental site was located at latitude 14°53′13″ N and longitude 101°59′42″ E. The temperatures varied from 20.0 to 35.5 °C with an average relative humidity of 76% (Nakhon Ratchasima Meteorological Department, Nakhon Ratchasima, Thailand). Before this study, heavy metal levels in water and soil were assessed, and the experimental area was not treated with pesticides or herbicides. All animals were raised according to the National Bureau of Agricultural Commodity and Food Standards [27].
Three hundred and sixty (1-day-old) mixed-sex Thai native crossbred chicks, KRC, were produced at the Poultry Research Unit of the University Farm. The chicks were vaccinated against Marek's disease on d 1, Newcastle disease and infectious bronchitis on d 7 and 21, and Gumboro disease on d 14. After hatching, the chicks were randomly allocated to two different raising systems (considered treatment groups) using a completely randomized design; each treatment group comprised 6 pens and 30 chickens per pen. In the CO (control group), chickens were housed in an indoor pen (5 birds/m2) and fed with mixed feed derived from commercial feedstuffs, while the chickens in the OR group were fed with mixed feed derived from certificated organic feedstuffs in an indoor pen (5 birds/m2) with free access to an outdoor Ruzi pasture (4 m2/bird) from d 21 of age to slaughter age (d 84). Ruzi pasture, planted from seed and grown by irrigation, is very palatable and tolerates moderately heavy grazing, and the chickens were allowed to eat this grass daily. The organic feed content in the starter, grower, and finisher diets fed to OR chickens was 96.20%, 96.65%, and 96.85%, respectively. The experimental diets used in both raising systems are shown in Table 1; the energy and protein levels of the diets were adjusted to the same level. The chickens had ad libitum access to feed and water throughout the experimental period.
Growth Performance and Carcass Composition
Growth performance was estimated by assessing the BW and feed intake (FI) every week, and subsequently, BW gain (BWG) and FCR were calculated. The percentages of eviscerated carcasses and abdominal fat were measured as a ratio of the live chicken's BW after feed withdrawal. The percentages of the breast, thigh, and drumstick were estimated as the percentages of the chilled carcass weight.
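As a concrete illustration of these calculations, the sketch below computes BWG, ADG, and FCR from per-pen records; the weights and feed intake used here are hypothetical placeholder values, not data from the experiment.

```python
# Growth performance metrics from per-pen records (hypothetical values).
bw_start_g = 42.0       # mean body weight at d 1
bw_end_g = 1250.0       # mean body weight at d 84
feed_intake_g = 3900.0  # cumulative feed intake per bird
days_on_trial = 84

bwg_g = bw_end_g - bw_start_g       # body weight gain
adg_g = bwg_g / days_on_trial       # average daily gain
fcr = feed_intake_g / bwg_g         # feed conversion ratio (g feed / g gain)
print(f"BWG = {bwg_g:.0f} g, ADG = {adg_g:.1f} g/d, FCR = {fcr:.2f}")
```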
Sample Collection
At slaughter age (d 84), 24 chickens from each group were randomly selected and electrically stunned, and their feathers were removed with a machine. Then, they were scalded and eviscerated manually. The carcass composition and meat quality were measured in 12 chickens per treatment (6 males and 6 females). The proximate composition, FA profile, cholesterol content, and nucleotide content were estimated from the breast and thigh meat samples obtained from the remaining 12 chickens (6 males and 6 females). The samples were stored at −20 °C until analysis.
Drip Loss Measurement
The breast and thigh meat samples were cut into 1.5 cm (width) × 3.0 cm (length) × 0.5 cm (thickness) pieces from the same position after chilling for 24 h. Then, the cut meat samples were hung inside a chilled storage room at 4 °C for 24 h. The drip loss was estimated using the following formula:

Drip loss (%) = (Weight before storage − Weight after storage) / Weight before storage × 100
Cooking Loss Measurement
After thawing the breast and thigh samples overnight, they were weighed and boiled in a water bath in open plastic bags until an internal temperature of 80 °C was reached. Cooking loss was calculated as follows:

Cooking loss (%) = (Weight before boiling − Weight after boiling) / Weight before boiling × 100
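Both loss measures above are the same percentage-of-initial-weight calculation applied to different before/after weights; a minimal helper is sketched below with hypothetical sample weights.

```python
def percent_loss(weight_before_g: float, weight_after_g: float) -> float:
    """Drip or cooking loss as a percentage of the initial sample weight."""
    return (weight_before_g - weight_after_g) / weight_before_g * 100.0

# Hypothetical sample weights (g), for illustration only.
print(f"Drip loss: {percent_loss(20.0, 19.1):.1f}%")     # before/after chilled storage
print(f"Cooking loss: {percent_loss(20.0, 15.4):.1f}%")  # before/after boiling
```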
Warner-Bratzler Shear Force Measurement
A texture analyzer (TA-XT2, Texture Technologies Corp., Scarsdale, NY, USA) was used to determine the shear force of the cooked breast and thigh samples. At least two subsamples of 2.0 cm (width) × 3.0 cm (thickness) × 0.5 cm (length) were cut parallel to the muscle fibers. The crosshead speed was set at 20 cm/min, and the shear force was calculated following the method described by Wattanachant et al. [28].
Morphological Analysis
Tissue samples were fixed in 10% formalin solution for 24 h at room temperature, dehydrated, and embedded in paraffin wax. Tissue sections (3 µm) were stained with hematoxylin and eosin (H&E). The changes in muscle morphology were visualized using a light microscope (Olympus CX21, Hicksville, NY, USA) and the ZEN software (AxioCam ERc5s, ZEN lite, 2012). The ImageJ program was used for muscle fiber diameter analysis, modifying the procedure described in a previous study [29].
Proximate Analysis
Proximate analysis of meat was performed following the standardized methods of the Association of Official Analytical Chemists [30]. Briefly, 2 g breast and thigh meat samples were both dried at 102 °C for 15 h to estimate the moisture content. The crude protein (CP) percentage was determined using the Kjeldahl method (VAPO45, Gerhardt Ltd., Idar-Oberstein, Germany), and the total crude fat content was determined following the protocol of Jeon et al. [31].
Fatty Acid Profile Measurement
Total lipids were extracted from breast and thigh samples. Briefly, 5 g breast and thigh meat samples were each dissolved in 90 mL chloroform-methanol (2:1, v/v), and total lipid was extracted following the method described in a previous study [32]. Subsequently, fatty acid methyl esters (FAMEs) were prepared by methylation following the procedure reported by Metcalfe et al. [33]. The FAMEs were analyzed using gas chromatography (Hewlett-Packard 7890A; Agilent Technologies, Santa Clara, CA, USA) fitted with a capillary column (SP 2560, Supelco Inc., Bellefonte, PA, USA; 100 m × 0.25 mm i.d., 0.20-µm film thickness) and a flame ionization detector. Helium was used as the carrier gas at a flow rate of 0.95 mL/min. The injector and detector temperatures were set to 260 °C. The oven temperature was programmed to increase from 70 °C to 175 °C at a rate of 13 °C/min, and then to 240 °C at a rate of 4 °C/min.
Total Collagen Content Measurement
Total collagen content was estimated following the method described by da Silva et al. [34] with some modifications. Briefly, 50 mg breast and thigh meat samples were both hydrolyzed in 1 mL 7 M NaOH in an autoclave at 121 °C for 40 min. Sulfuric acid (3.5 M) was used to neutralize the hydrolyzed samples to pH 7. Then, the neutralized samples were filtered and mixed with chloramine T solution and Ehrlich's reagent. Afterward, the absorbance at 550 nm was measured (Genesys 10S UV-VIS, Thermo Fisher Scientific, Madison, WI, USA) using hydroxyproline (Sigma-Aldrich Co., St. Louis, MO, USA) as a standard. A coefficient of 7.25 was used to convert hydroxyproline to total collagen content [35]. The collagen content was expressed in mg of collagen per g of meat.
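The conversion step above (hydroxyproline read off a standard curve, then multiplied by 7.25) can be sketched as follows; the standard-curve readings, sample absorbance, and hydrolysate volume are hypothetical, and dilution factors are ignored for simplicity.

```python
import numpy as np

# Hypothetical hydroxyproline standard curve: absorbance at 550 nm vs concentration (ug/mL).
std_conc_ug_ml = np.array([0.0, 2.0, 4.0, 8.0, 16.0])
std_abs = np.array([0.01, 0.10, 0.19, 0.39, 0.78])
slope, intercept = np.polyfit(std_conc_ug_ml, std_abs, 1)  # linear standard curve

sample_abs = 0.45                              # hypothetical sample reading
hyp_ug_ml = (sample_abs - intercept) / slope   # hydroxyproline concentration
hyp_mg = hyp_ug_ml * 1.0 / 1000.0              # ug in 1 mL hydrolysate -> mg
collagen_mg_per_g = hyp_mg * 7.25 / 0.050      # coefficient 7.25; 50 mg meat = 0.050 g
print(f"Total collagen: {collagen_mg_per_g:.2f} mg/g meat")
```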
Nucleotide Content Measurement
To extract the nucleic acids, 5 g breast and thigh meat samples were each mixed with 30 mL ice-cold 7.5% perchloric acid and homogenized for 30 s. Next, 10 mL ice-cold 7.5% perchloric acid was added, and the mixture was centrifuged at 2000× g at 4 °C for 5 min. The solution was then filtered through a filter paper (No. 1, Whatman International Ltd., Maidstone, UK). The filtrate (1 mL) was analyzed using high-performance liquid chromatography (HPLC) (HP 1260, Agilent Technologies, Inc., Santa Clara, CA, USA) fitted with a Hypersil ODS C18 column (3 µm, 150 mm × 4.6 mm) (Thermo Scientific, Waltham, MA, USA). The analytical conditions for HPLC were set following Kim et al. [36] with some modifications. The peaks of the individual nucleotides were identified using the retention times estimated for the standards inosine-5′-monophosphate (IMP) and guanosine-5′-monophosphate (GMP) (both obtained from Sigma, St. Louis, MO, USA), and the concentration of each nucleotide was estimated from its peak area.
Cholesterol Measurement
The cholesterol content in the meat samples was estimated by gas chromatography following the method described by Rowea et al. [37] with some modifications, with α-cholesterol used as the internal standard. A gas chromatograph fitted with a flame ionization detector and equipped with an HP-5 column (30 m × 0.32 mm; film thickness, 0.22 µm; Agilent Technologies, Palo Alto, CA, USA) was used for the analysis. The injection port and detector temperatures were set at 260 °C and 255 °C, respectively. Cholesterol was identified by comparing the relative retention time of the sample with that of the standard (Carlo Erba Reagents, Milan, Italy).
Sample Preparation
Breast samples were cut into 1 cm × 1 cm pieces and placed in an aluminum foil block filled with optimal cutting temperature (OCT) compound. Subsequently, the cut samples were completely embedded in OCT and immediately frozen in liquid nitrogen. The breast samples were then sectioned using a cryostat (Microm HM 525) until the region of interest was reached. The optimized thickness of the tissue sections for infrared measurement was 6 µm. The breast sample sections were then kept in a desiccator with a vacuum pump for 30 min.
SR-FTIR Spectra Measurement
The biochemical composition of the samples was analyzed using SR-FTIR spectroscopy [38]. Spectral data were collected at the infrared microspectroscopy beamline BL4.1 (IR Spectroscopy and Imaging) at the Synchrotron Light Research Institute (SLRI, Nakhon Ratchasima, Thailand). Spectra were obtained using a Vertex 70 FTIR spectrometer (Bruker Optics, Ettlingen, Germany) coupled to an IR microscope (Hyperion 2000, Bruker) equipped with a liquid-nitrogen-cooled MCT detector. The data were collected over the 4000 to 800 cm−1 measurement range. The measurement was performed in mapping mode with an aperture size of 10 µm × 10 µm and acquisition of 64 scans at a spectral resolution of 4 cm−1. The OPUS 7.2 software (Bruker Optics Ltd., Ettlingen, Germany) was used for instrument control and derivation of the spectra, and the results were analyzed with the CytoSpec software.
Samples from CO and OR (12 samples per group) were used to investigate the changes in the biochemical composition of meat. First, the original spectra were averaged to obtain a total of five spectra; the second derivative was then computed with the Savitzky-Golay method (13 smoothing points), followed by vector normalization, in Unscrambler X software (version 10.1, Camo Analytics, Oslo, Norway) to account for the effects of varying sample thickness.
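A minimal sketch of this preprocessing chain (Savitzky-Golay second derivative followed by vector normalization), using scipy rather than Unscrambler X, is shown below; the spectra array is a random placeholder for the exported data.

```python
import numpy as np
from scipy.signal import savgol_filter

# Placeholder: rows = averaged spectra, columns = absorbance values per wavenumber.
spectra = np.random.rand(5, 1600)

# Second derivative via Savitzky-Golay: 13-point window, 2nd-order polynomial.
d2 = savgol_filter(spectra, window_length=13, polyorder=2, deriv=2, axis=1)

# Vector normalization: scale each spectrum to unit Euclidean length.
normalized = d2 / np.linalg.norm(d2, axis=1, keepdims=True)
```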
Curve Fitting for the Amide I Band

The peak positions and band shapes were selected for curve fitting, which examines the areas of the overlapping peaks of the amide I band (1700 to 1600 cm−1) in the FTIR spectra using a nonlinear least-squares approach based on Gaussian and Lorentzian functions. The fitting parameters, namely beta-sheet (1645 to 1620 cm−1), alpha-helix (1640 to 1650 cm−1), beta-turn (1685 to 1675 cm−1), and anti-parallel beta-sheet (1695 to 1685 cm−1), were measured.
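To make the curve-fitting step concrete, the sketch below decomposes a synthetic amide I band into Gaussian components anchored near the secondary-structure positions listed above and reports their relative areas. It uses pure Gaussians and synthetic data (the study used Gaussian and Lorentzian functions on measured spectra), so it is an illustration of the approach rather than the exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_sum(x, *params):
    # params = [amp1, center1, sigma1, amp2, center2, sigma2, ...]
    y = np.zeros_like(x)
    for amp, cen, sig in zip(params[0::3], params[1::3], params[2::3]):
        y += amp * np.exp(-((x - cen) ** 2) / (2.0 * sig ** 2))
    return y

# Synthetic amide I band (1600-1700 cm^-1) built from three known components plus noise.
x = np.linspace(1600, 1700, 400)
y = gaussian_sum(x, 1.0, 1632, 8, 0.8, 1654, 7, 0.3, 1680, 6)
y += 0.01 * np.random.default_rng(0).normal(size=x.size)

# Initial guesses near the beta-sheet, alpha-helix, and beta-turn positions.
p0 = [1.0, 1630, 8, 1.0, 1650, 8, 0.5, 1680, 6]
popt, _ = curve_fit(gaussian_sum, x, y, p0=p0)

# Relative content = component area / total area; Gaussian area = amp * sigma * sqrt(2*pi).
areas = [amp * sig * np.sqrt(2 * np.pi) for amp, sig in zip(popt[0::3], popt[2::3])]
for name, area in zip(["beta-sheet", "alpha-helix", "beta-turn"], areas):
    print(f"{name}: {100 * area / sum(areas):.1f}%")
```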
Statistical Analyses
The significance of differences in the mean values of growth performance traits, carcass composition, breast meat quality, biochemical compositions, and FA content of KRC meat between CO and OR was analyzed by t-test using SPSS software (version 16.0; SPSS Inc., Chicago, IL, USA). All data are expressed as mean ± SD, and a p-value of <0.05 was considered significant.
A data matrix relating the FTIR spectral data (spectral intensities from 3000 to 1000 cm−1) of the CO and OR chicken samples to meat quality, n-3 PUFA content, secondary protein structures, and biochemical compounds was generated. The clustering of the variables was analyzed using principal component analysis (PCA). The relationships between the variables and sample properties were identified using a biplot obtained from a two-dimensional PCA scatter plot together with the dominant spectral bands of the different variables.
The correlation between meat quality, n-3 PUFA content, secondary protein structures, and biochemical compounds for each cluster of the control and organic chicken samples in the data matrix were weighted using an SD weighting process and calculated using PCA, after which a biplot correlation between variables was created using multivariate analysis.
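A minimal PCA sketch of this kind of analysis is given below; the data matrix is a random placeholder, and standard-score scaling stands in for the SD-weighting step described above.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder matrix: rows = samples (12 CO + 12 OR), columns = variables
# (spectral intensities, meat-quality traits, n-3 PUFA content, ...).
X = np.random.rand(24, 50)
labels = ["CO"] * 12 + ["OR"] * 12

X_scaled = StandardScaler().fit_transform(X)   # SD-weighting of the variables
pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)           # sample coordinates (biplot points)
loadings = pca.components_.T                   # variable directions (biplot arrows)
print("Explained variance ratio:", pca.explained_variance_ratio_)
```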
Growth Performance and Carcass Yield
The growth performance and carcass yield of chickens are shown in Tables 3 and 4, respectively. It was observed that the final BW of the OR and CO chickens did not differ (p > 0.05), whereas the FI and FCR of the OR chickens were lower than those of CO (p < 0.001). The results were inconsistent with our hypothesis that chickens reared in OR would be exposed to fluctuating temperatures and increased activity in the yard requiring higher energy, consequently leading to decreased BW and increased FCR. Our hypothesis aligns with the findings of Mattioli et al. [46], who reported that exercise behavior is negatively correlated with the performance of chickens, as high movement can increase energy metabolism and decrease their growth performance. Although several studies related to the effect of OR on growth performance have been reported [12,47,48], the results of these studies, including those of the present study, show inconsistency, which could be attributed to differences in rearing environmental factors, including light intensity, photoperiod, temperature, breed of chicken, diet, and the forages, insects, and worms found in pasture [20]. Likewise, several studies have shown that the raising environment affects the quality of grass [49], natural diet [50,51], and diversity of microorganisms in a specific area [52]. Furthermore, in concordance with the results of previous studies, the reduced FI of OR chickens could also be attributed to their free access to natural diets from the rearing environment [10,53]. The lower FCR in OR chickens than in CO chickens with no significant differences in their BW could be due to the enrichment of the digestive tract of OR chickens with beneficial microorganisms, which might have contributed to the activation of beneficial enzymes, consequently leading to increased utilization of protein or carbohydrate from grass [54]. However, as the present study was aimed at identifying the effects of raising systems on growth performance, we did not explore the precise role of different feed sources, such as natural or commercial diets, on gut microbiota composition. Therefore, further in-depth studies are needed to understand the effect of OR on the gut microbiota of chicken. Furthermore, the findings demonstrated no differences (p > 0.05) in carcass yields of the OR and CO chickens, whereas the yield of abdominal fat in chickens from OR was lower (p = 0.029) than that of the chickens from CO, as reported in previous studies [12,55]. Though it was expected that the carcass yield of chickens in OR would be higher than that of chickens in CO, because the former have more activity during the day, resulting in muscle repair and increased muscle fiber size (hypertrophy), our study, like the study of Castellini et al. [12], did not meet the above expectations. This could be due to the high temperature (20.0-35.5 °C) during the experimental period (summer) of this study, which might have restricted the chickens to staying close to their house, leading to reduced exercise and motor activity and, hence, no gain in muscle mass. In contrast, Comert et al. [9] demonstrated a higher amount of abdominal fat in chickens grown in the OR system than in those grown in the CO system. This difference could be attributed to the different genotypes and the sex of birds used in the two studies.
Taken together, we inferred that the OR chickens engage in more physical activity than the CO chickens, which, though it increases the energy metabolism rate and reduces abdominal fat accumulation, is not sufficient to increase the carcass yield. Furthermore, sex and genotype are other important factors affecting the carcass characteristics and should therefore be carefully considered [9,56].
Physicochemical Properties of Chicken Meat
The effects of the raising system on the biochemical composition of OR chicken meat are presented in Table 5. The results demonstrated that, except for protein and total collagen content, the raising system had no effects (p > 0.05) on moisture, cholesterol, fat, IMP, and GMP contents in breast and thigh meat. However, the protein and total collagen contents were higher (p < 0.05) in OR meat than in CO meat. IMP and GMP are key compounds contributing to flavor [57]; in addition, they participate in energy metabolism and ensure energy supply to cells. IMP is generated in the process of adenosine triphosphate (ATP) consumption [58], and ATP is produced whenever an animal is active [58,59]. Moreover, IMP can be converted to GMP [60]. Considering these, it can be inferred that physical activity did not differ sufficiently between the OR and CO chickens; therefore, no significant differences were observed for IMP, GMP, cholesterol, and crude fat between the OR and CO chicken meat samples.
On the contrary, the increased protein and collagen contents suggested that access to the outdoors increased physical activity in OR chickens sufficiently to alter these components, indicating that these two traits could be highly sensitive to physical activity. In concordance with the above speculation, Miller et al. [61] reported that the rates of skeletal muscle collagen and sarcoplasmic protein synthesis increased markedly and rapidly after exercise. Moreover, the results of the present study are congruent with those of previous studies [5,6,62]. Mikulski et al. [62] reported that the highest protein content was detected in the meat of chickens raised with outdoor access of about 12 h a day from d 21 to d 64. It has been shown that organic chickens that forage on pasture 12 h daily (depending on the conditions each day) exhibited greater activity from d 21 to d 84, resulting in more type IIA muscle fibers, and were able to synthesize more protein [63]. Collectively, the findings of the present study and those of previous studies suggest that rearing chickens with outdoor access could increase the CP and total collagen content in meat.
The results for the physicochemical properties of meat shown in Table 6 indicate that the raising system had no significant effect on most traits, except for shear force and color, whose values were higher (p < 0.001) in OR chicken than in CO chicken.

Table 6. Effects of organic raising system on breast meat quality of Korat chicken at 84 d of age (mean ± SD).

Generally, the ultimate pH is largely determined by the initial glycogen storage in the muscle, and the decline in muscle pH is related to glycolysis activity under anaerobic conditions [64], wherein a lower pH is related to higher drip loss and cooking loss [65]. This could be because a decline in the muscle pH causes a reduction in the net charge of muscle protein and in the charged protein sites for binding of water molecules, resulting in greater water and nutrient losses [66]. However, in this study, the raising systems demonstrated no effect on pH, drip loss, and cooking loss. These results suggested that the energy expenditure might not significantly affect the rate of glycolysis in the chickens from either raising system.
Shear force and muscle diameter indicate tenderness [67]. Increased shear force is the consequence of higher protein and collagen levels in the meat sample. Furthermore, it is known that chicken breast meat is composed of type IIB muscle fibers [59], and prolonged exercise training can induce the transition of type IIB muscle fibers to type IIA muscle fibers. The latter fiber type has a high capacity to generate ATP by oxidative metabolic processes. Therefore, it requires more oxygen to maintain its activity and induce protein synthesis, leading to increased muscle diameter [68,69]. In this study, the muscle diameter of OR chicken meat was slightly larger (p = 0.056) than CO chicken, which indicated that the OR chickens have higher movement, though not significant, leading to adaptive changes in the skeletal muscle fiber.
The greater redness (p = 0.004) and yellowness (p < 0.0001) of OR meat and skin observed in this study agree with those reported in the study of Grashorn and Serini [5]. It could be attributed to the consumption of grasses, a major source of carotenoid pigments [16,62,70], as the OR chickens had free access to the grass fields.
Fatty Acid Profile of Chicken Meat
The FA profiles in the breast and thigh meat of OR and CO chickens are shown in Table 7. No differences in saturated fatty acids (SFA), monounsaturated fatty acids (MUFA), and PUFA content of KRC meat were detected in the different raising systems (p > 0.05). In contrast, the proportion of total n-3 PUFA in breast and thigh meat of chickens raised under OR was higher (p = 0.01) than in those raised under CO. Moreover, the DHA (C22:6n-3) content was higher (p = 0.01) in breast meat. However, the ratio of n-6 to n-3 PUFA was lower in the breast (p < 0.001) and thigh (p = 0.02) of OR chickens. It has been shown that FA from the diet strongly influences the amount of FA in meat [71]. Similarly, grass intake has been shown to increase antioxidants content in plasma and consequently decrease FA oxidation [45]. In addition, fresh grass contains 50-75% ALA [72], a precursor of long-chain n-3 PUFA. It can be converted to EPA (C20:5n-3), docosapentaenoic acid (DPA, C22:5n-3), and DHA (C22:6n-3) through the biochemical processes of elongation and desaturation [14]. Moreover, previous studies reported that slow-growing chickens have higher expression of FADS1 and FADS2 genes involved in n-3 PUFA and n-6 PUFA metabolism [73] and can maintain their oxidative stability during their activity than fast-growing chickens [46]. Considering these, the increased level of DHA in KRC breast meat could be attributed to the influence of the consumption of pastures by the OR chickens. At the same time, the non-increase in DHA content in the thigh meat could be explained by the different lipid compositions in breast and thigh meat. The DHA content is preferentially stored in the form of phospholipids than triglycerides. These results are consistent with Bou et al. [74], who reported that the ratio of phospholipids to triglycerides is higher in breast meat than in thigh meat. Table 7. Major fatty acid profiles (g/100 g total FA) of skinless breast and thigh meat from organic chickens (mean ± SD). Furthermore, a lower ratio of n-6 to n-3 PUFA in the meat of OR chickens owing to increased n-3 PUFA levels could also be due to the competition of FA for desaturase and elongase enzymes [75]. In concordance, Lopez-Ferrer et al. [76] have also reported that the high levels of n-3 PUFA intake may have reduced the desaturase and elongase enzymes of the precursors of n-6 PUFA, leading to low n-6 to n-3 PUFA in organic KRC meat. Interestingly, OR chicken showed a ratio of n-6 to n-3 lower than 10, which is beneficial for human health [77].
Dal Bosco et al. [10] reviewed several studies and reported an interaction between genetic strain and movement, antioxidant intake, the antioxidant capacity of the body and plasma, and the fatty acid profile of the meat. Their review demonstrated that the selected strain, higher kinetic activity, and less controlled environmental conditions affect the oxidative status and fatty acid profile of the meat. Therefore, this relationship must be analyzed in future studies. Measurements of enzyme activity, of the expression of genes involved in FA metabolism such as FADS1 and FADS2, and of antioxidant status are needed to obtain solid and sound knowledge explaining FA accumulation in organic KRC as a representative of slow-growing chickens raised under OR.
Changes in Biochemical Profile and Secondary Protein Structure in Breast Meat
The average original and second-derivative SR-FTIR spectra in the fingerprint region of wavenumbers from 3000 to 900 cm−1 obtained from CO and OR are shown in Figure 1A,B, respectively. The differences in the average spectra of breast meat samples from CO and OR were detected in the ranges of 2946 cm−1 and 2885 cm−1, representing lipid; 1700 cm−1 and 1672 cm−1, representing protein (amide I); and 1139 to 955 cm−1, representing carbohydrate and glycogen.

Figure 1. Average SR-IR spectra of original (A) and second derivative (B) from KRC chicken breast meat between control and organic raising systems. Infrared spectra were collected in the spectral range of 4000 to 900 cm−1 at a resolution of 4 cm−1, based on 300 spectra per treatment; 3000 to 2800 cm−1 (CH stretch of lipid); 1740 cm−1 (C=O ester of lipid); 1700 to 1600 cm−1 (amide I); 1600 to 1500 cm−1 (amide II); 1338 cm−1 (amide III); and 1250 to 900 cm−1 (carbohydrate and glycogen).
The areas under the peaks in these regions were integrated, revealing differences between the CO and OR spectra (Table 8). The integral areas of the lipid (C-H stretching), amide I (C=O stretching), amide II (C-N stretching vibrations in combination with N-H bending), and glycogen regions were greater (p < 0.05) in OR than in CO, whereas no significant difference (p = 0.061) was found for amide III. However, the result for amide III obtained from the t-test did not agree with the result of the loading correlation shown in Figure 2B. This discrepancy arises because PCA analyzes data sets containing imprecise measurements and extracts only the maximum variance and the most important information from the data set, presenting the result as a set of summary indicators known as principal components that show the differences in the data matrix [26]. The amide I band represents different types of secondary protein structures, such as α-helix, β-sheet, β-turn, and antiparallel β-sheet, which are related to meat quality [78]. Here, we examined the secondary structures of proteins in the region of the amide I band, where we found a significant difference, and calculated the ratio of their relative contents by curve fitting (Table 9). The results revealed no differences (p > 0.05) in secondary protein structure between the OR and CO chickens, except that the β-sheet content of OR chicken meat was slightly lower (p = 0.066) than that of CO. A possible reason could be that the different fat and FA profiles of the two groups altered the rate of lipid oxidation, leading to different conformational changes in the secondary protein structures [18].
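The curve fitting used to estimate the relative contents of the secondary structures is typically performed by decomposing the amide I band into component peaks and comparing their areas. The following sketch uses synthetic data; the band centers are common literature assignments (β-sheet, α-helix, β-turn, antiparallel β-sheet), not values taken from this study:

```python
import numpy as np
from scipy.optimize import curve_fit

# Common amide I band assignments (cm^-1): beta-sheet, alpha-helix,
# beta-turn, antiparallel beta-sheet. Literature values, assumed here.
centers = [1628.0, 1656.0, 1675.0, 1690.0]

def gaussians(x, *p):
    """Sum of fixed-center Gaussian components; p = (a1, w1, a2, w2, ...)."""
    y = np.zeros_like(x)
    for (a, w), c in zip(zip(p[::2], p[1::2]), centers):
        y += a * np.exp(-((x - c) / w) ** 2)
    return y

# Synthetic amide I band standing in for a measured spectrum.
x = np.linspace(1600, 1700, 400)
true = (0.8, 8.0, 1.0, 9.0, 0.4, 7.0, 0.2, 6.0)
y = gaussians(x, *true) + np.random.default_rng(0).normal(0.0, 0.01, x.size)

p0 = (0.5, 8.0) * len(centers)                  # initial guesses
popt, _ = curve_fit(gaussians, x, y, p0=p0)

# Gaussian area is a*w*sqrt(pi); sqrt(pi) cancels in the ratios.
areas = np.array([a * w for a, w in zip(popt[::2], popt[1::2])])
for c, frac in zip(centers, areas / areas.sum()):
    print(f"{c:.0f} cm^-1: {100 * frac:.1f} % of the amide I area")
```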
Correlation Loading Plot of FTIR Spectra with the Biochemical Compounds and Quality of Breast Meat from Different Raising Systems
To elucidate the relationship between biochemical compounds and the quality of breast meat (Figure 2A) from the two raising systems, we combined data from FTIR spectra related to biomolecules and physicochemical properties of chicken meat and then used PCA to classify the CO and OR groups. As shown in Figure 2A, the meat of chickens from different raising systems was separated, representing 57% of the total variability of all data sets.
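A minimal sketch of this PCA/correlation-loading workflow, assuming a combined matrix of integrated band areas and physicochemical traits; the synthetic data and variable layout are placeholders rather than the study's data set, while the 50% (r² > 0.5) outer-circle criterion follows the description above:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Placeholder matrix: 40 samples (20 CO, 20 OR) x 6 variables, e.g.
# shear force, a*, b*, amide I, amide II, glycogen band areas.
group = np.repeat([0, 1], 20)
X = rng.normal(0.0, 1.0, (40, 6)) + 0.9 * group[:, None]

Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
scores = pca.fit_transform(Z)
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 2))

# Correlation loadings: correlation of each variable with each PC score.
# Variables whose r^2 values sum to > 0.5 fall in the 'outer circle'.
corr = np.array([[np.corrcoef(Z[:, j], scores[:, k])[0, 1]
                  for k in range(2)] for j in range(Z.shape[1])])
outer = (corr ** 2).sum(axis=1) > 0.5
print("variables in the outer (>50%) circle:", np.flatnonzero(outer))
```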
The correlation loading plot in Figure 2B shows that the traits shear force, meat color (redness a*, yellowness b*), skin color (lightness L*s), CP, lipid, collagen, carbohydrate and glycogen, amide I and amide II of protein, and amide III of collagen are located in the outer circle area, which explains over 50% of the variance between the two groups and indicates significant correlations among these traits.
The negative loading plot in Figure 2B shows that shear force, meat color (a* and b*), CP, collagen content, amide I of protein (1700 to 1600 cm−1), amide II of protein (1480 to 1575 cm−1), amide III of collagen (1229 to 1310 cm−1), carbohydrate and glycogen (1200 to 1000 cm−1), and lipid (2955 to 2800 cm−1) were positively correlated with the negative score plot of chicken breast meat in the OR group. Although the amide I of β-turn (1670 to 1680 cm−1) lay in the outer circle, its position was close to the PC-2 axis of the loading plot, meaning that it could not be used to distinguish breast meat from CO and OR. On the contrary, the positive loading plot showed that skin color (lightness L*s) was positively correlated with the positive score plot of chicken breast meat from CO. These results revealed that the raising system differentiates meat properties.
The PCA results confirmed the results shown in Tables 4 and 5. In addition, the glycogen and lipid contents identified by SR-FTIR were higher in the breast meat of OR chickens than in CO chickens. This could be explained by the fact that organic chickens are more active and store more glycogen in their skeletal muscle [79].
Furthermore, under excess glycogen stores, glucose can be converted to fat, which is stored in the muscle through de novo lipid synthesis [80]. Therefore, we analyzed the relationship between raising systems and FA composition in chicken meat using PCA. As shown in Figure 2C, the FA profiles in chicken breast and thigh meat from the different raising systems were separated, with 50% of the total variability of all data sets. The correlation loading plot in Figure 2D shows that the traits total SFA, PUFA, linoleic acid (LA; C18:2n-6), arachidonic acid (AA; C20:4n-6), and n-3 PUFAs such as ALA and DHA are located in the outer circle areas, which explains over 50% of the variance between the two groups and indicates significant correlations among these traits.
The negative loading plot in Figure 2D shows that PUFA and n-3 PUFA in the breast and thigh, ALA, SFA, and LA in the thigh, and DHA in breast meat were positively correlated with the negative score plot of chicken meat in the OR group. The positive loading plot in Figure 2D shows that the ratio of n-6 to n-3 PUFA was positively correlated with the positive score plot of CO but negatively correlated with the negative score plot of chicken meat from OR.
The PCA results confirmed some of the FA components in chicken meat from the OR system, as shown in Table 7. Moreover, the PCA revealed a correlation between OR and the amount of SFA in thigh meat, which could be attributed to the FA composition of the feed, especially Ruzi grass, which contains 22.91% SFA. In addition, the analysis revealed higher contents of PUFA and ALA and a lower content of n-6 PUFA in OR. These results further support the conclusion that feed plays an important role in modifying the FA profile of chicken meat. Furthermore, it has been shown that the high amounts of PUFA and ALA in Ruzi grass can modify the PUFA and ALA in meat, while an increased intake of PUFA, especially FA from the n-3 PUFA group, can reduce n-6 PUFA such as LA in breast and AA in thigh meat [81].
Conclusions
In conclusion, our results reveal that OR has no negative effect on the growth performance of slow-growing chickens. The study shows that the OR system has a positive effect on meat characteristics, especially meat color and texture, biochemical compounds such as proteins (amide I and amide II), total collagen (amide III), and beneficial FA (PUFA, DHA, and ALA), which determine the nutritional value of meat. The findings of this study demonstrate the potential of OR for commercial adoption in tropical regions such as Southeast Asia. Furthermore, the study demonstrates the efficiency of SR-FTIR in determining differences in biochemical compounds, which could serve as markers to monitor meat quality traits. Collectively, these findings provide insights into the relative roles of raising systems in KRC chickens and can help producers to produce nutritionally rich quality products while maintaining animal welfare standards.

Funding: This work was supported by the Suranaree University of Technology (SUT), the Thailand Science Research and Innovation (TSRI), and the National Science, Research and Innovation Fund (NSRF) (project code 90464). We also thank the National Research Council of Thailand (NRCT) (project code SUT3-303-59-12-24) and the Center of Excellence on Technology and Innovation for Korat Chicken Business Development, granted by SUT (project code CoE3-303-62-60-02), for their financial support.
Institutional Review Board Statement:
The experiments were conducted with appropriate management to ensure that unnecessary discomfort to the animals was avoided. The Ethics Committee on Animal Use of the Suranaree University of Technology (SUT), Nakhon Ratchasima, Thailand, approved all the procedures used in this present experiment (user application ID: U1-02633-2559).
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. | 2022-02-27T16:20:23.603Z | 2022-02-24T00:00:00.000 | {
"year": 2022,
"sha1": "0aaf94774e05e88acf4a99ab033bbc1a20a24bc9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2615/12/5/570/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d13c7dbd9ab7dc64ba189f751422d0f4d13d3284",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252224750 | pes2o/s2orc | v3-fos-license | Life factors acting on systemic lupus erythematosus
Systemic lupus erythematosus (SLE) is a highly heterogeneous autoimmune disease that primarily affects women. Currently, in the search for the mechanisms of SLE pathogenesis, the association of lifestyle factors such as diet, cigarette smoking, ultraviolet radiation exposure, alcohol and caffeine-rich beverage consumption with SLE susceptibility has been systematically investigated. The cellular and molecular mechanisms mediating lifestyle effects on SLE occurrence, including interactions between genetic risk loci and environment, epigenetic changes, immune dysfunction, hyper-inflammatory response, and cytotoxicity, have been proposed. In the present review of the reports published in reputable peer-reviewed journals and government websites, we consider the current knowledge about the relationships between lifestyle factors and SLE incidence and outline directions of future research in this area. Formulation of practical measures with regard to the lifestyle in the future will benefit SLE patients and may provide potential therapy strategies.
Introduction
Systemic lupus erythematosus (SLE) is a highly heterogeneous autoimmune disease that primarily affects women, especially in the reproductive age. The prevalence rate of SLE worldwide is about 20-70 per 100,000 general population (1,2). The exact etiology of SLE remains unclear, but genetic risk loci, such as N-acetyltransferase 2 (NAT2) slow acetylator genotype, and environmental factors are crucial in the development of susceptibility to SLE (3, 4). Although many SLE susceptibility genes have been identified recently, gene therapy approaches remain a distant prospect from the point of view of the clinical treatment (5). Furthermore, the significant side effects of high-dose immunosuppressive therapy for SLE, such as osteoporosis, hypertension and infection, have caused much concern (4, 6). Thus, the knowledge of environmental and lifestyle risk factors, especially those that can be controlled, may offer new promising therapeutic strategies for SLE.
Here we review evidence from reports published in reputable peer-reviewed journals and government websites and consider recent advances in our understanding of the links between lifestyle factors with SLE susceptibility and development. In particular, we analyze the effects of the 1) diet including N-3 polyunsaturated fatty acids (N-3 PUFA), N-6 PUFA, calorie restriction, vitamins, as well as 2) other lifestyle factors, including cigarette smoking, ultraviolet radiation exposure, consumption of alcohol and caffeine-rich beverages, etc. Implementation of practical measures with regard to these lifestyle factors will benefit SLE patients and may provide potential therapy strategies.
Diet effects on SLE

N-3 PUFA and N-6 PUFA
In the last thirty years, numerous studies in murine SLE models such as NZBWF1, BXSB/MpJ, and MRL-lpr/lpr mice reported that fish and olive oils containing N-3 PUFA effectively attenuated plasma auto-antibodies, proteinuria, and kidney glomerulonephritis, as well as increased the lifespan of animals, compared with the phenotypes of mice fed beef tallow containing saturated fatty acids, N-6 PUFA, or N-9 monounsaturated fatty acids (N-9 MUFA) (Figure 1) (7-12). Furthermore, an increasing number of human clinical trials demonstrated that consumption of N-3 PUFA had positive effects on autoimmune glomerulonephritis conditions, such as lupus nephritis (13-17). Since the earliest clinical trial in 1989, there have been seven major published clinical studies focusing on the relationship between N-3 PUFA and SLE. All but one of these studies reported beneficial effects, including improvements in endothelial function, disease activity, or inflammatory markers following the implementation of N-3 PUFA in SLE patients (18). A clinical nutritional study of SLE patients found that dietary patterns low in N-3 PUFA and high in carbohydrates positively correlated with the severity of disease activity, adverse serum lipids, and the presence of plaque (19). A double-blind, double placebo-controlled factorial trial in 52 patients with SLE (15) reported a significant decline in the SLAM-R score (revised Systemic Lupus Activity Measure) from 6.12 to 4.69 in subjects receiving eicosapentaenoic acid (EPA)/docosahexaenoic acid (DHA) compared to those on placebo. In the study carried out by Das and colleagues (20), daily oral supplementation of even moderate doses of EPA and DHA (EPA 162 mg, DHA 144 mg) induced prolonged remission of SLE in ten patients. Furthermore, EPA and DHA also suppressed both T-cell proliferation and the production of inflammatory cytokines.
N-3 PUFA also reduced renal expression of pro-inflammatory and pro-fibrotic mediators such as tumor necrosis factor alpha (TNF-α), transforming growth factor beta 1 (TGF-β1), intercellular adhesion molecule 1 (ICAM-1), and fibronectin. N-3 PUFA increased the production of antioxidant enzymes and down-regulated mRNA expression of CD4+ T cell-associated genes, such as Cd80, Il6, Il10, Il18, Ccl5, Cxcr3, Tnfa, and Spp1, thereby reducing the inflammatory response, oxidative stress, and autoimmune reactions in murine SLE models (11, 39-46). In contrast, N-6 PUFA-containing corn oil, safflower oil, and sunflower oil, which all induced the production of plasma auto-antibodies, proteinuria, and glomerulonephritis by increasing the mRNA expression levels of the above-mentioned CD4+ T cell-associated genes in the kidney and/or spleen, contributed to the development of autoimmune reactions in NZBWF1 mice (11). The N-6 PUFA precursor was also shown to participate in the inflammatory process in SLE patients in a clinical study (13). However, the precise molecular mechanisms of N-3 PUFA and N-6 PUFA effects in SLE models remain unclear, and further studies are needed to confirm and correctly interpret the results of the published accounts.
Calorie restriction
There have been many studies examining the association between calorie restriction and autoimmune diseases such as SLE (Figure 1). Calorie restriction has been shown to alleviate SLE manifestations such as proteinuria, glomerulonephritis, and deposition of immune complexes, as well as to prolong the lifespan of lupus mouse models by down-regulating mRNA expression of genes encoding the proinflammatory mediators IFN-α, IL-10, IL-12, TNF-α, NF-κB, and the polymeric immunoglobulin receptor (47-52). This, in turn, reduced lymphoproliferation and antibody production, increased antioxidant defense, and decreased the extent of the T lymphocyte shift (53-56). It is known that circulating levels of the adipokine leptin markedly decrease with calorie restriction (57). Leptin has pro-inflammatory effects and may inhibit regulatory T cells as well as promote autoimmune responses (58-65). Hypoleptinemia and deficient leptin signaling led to the expansion of the population of regulatory T cells in NZB × NZW F1 mice (57) and a reduction in the number of Th17 cells in MRL/Mp-Faslpr mice (66), which contributed to the amelioration of SLE lesions. In addition, caloric restriction was also shown to significantly improve fatigue in subjects with SLE in a clinical study (67).
Vitamin D
A large body of evidence accumulated in the last decade suggests that vitamin D deficiency plays a key role in the development of autoimmune diseases such as SLE. Moreover, the degree of vitamin D deficiency in SLE patients correlates with the severity of SLE manifestations (Figure 1) (68-86). However, a study of a large prospective cohort of women born between 1980 and 2002 indicated that vitamin D consumption did not significantly affect the risk of SLE or rheumatoid arthritis (87). Furthermore, other prospective cohort studies suggested that dietary vitamin D intake during adolescence did not modify SLE risk in adulthood (88). Hiraki et al. suggested that the association between dietary vitamin D intake and SLE risk may be misleading, because only 20% of vitamin D comes from food, whereas 80% of vitamin D is generated in the skin following exposure to UVB; therefore, vitamin D consumption may not accurately reflect the extent of vitamin D deficiency or insufficiency (89). A clinical study conducted in 2017 showed that, among relatives of SLE patients, individuals with vitamin D deficiency were more prone to develop SLE (90). In summary, there is a relationship between the degree of vitamin D deficiency or insufficiency and SLE incidence or exacerbation.
Immunomodulatory effects of vitamin D were examined in patients with SLE, and it was shown that 1,25-(OH)2-D3 suppressed the proliferation of activated B cells, decreased the number of memory B cells, and reduced the production of immunoglobulin; it also inhibited the maturation and activation of dendritic cells and reduced the production of IFN-α. In addition, 1,25-(OH)2-D3 prevented the Th1 immune response and simultaneously enhanced the Th2 immune response, increased the number of regulatory T cells, and decreased the numbers of Th1 and Th17 cells. These multiple effects lead to the recovery and maintenance of immune homeostasis and an overall protective effect in SLE patients (86, 91-104). Although these observations justify the recommendation of vitamin D supplementation in SLE patients, the role of vitamin D is not fully elucidated (105-107).
Effects of cigarette smoking and consumption of alcohol and caffeine-rich beverages on susceptibility to SLE
Cigarette smoking
Numerous epidemiologic studies have revealed that exposure to cigarette smoke is associated with an increased risk of SLE (Figure 2) (108-115). Furthermore, strong and consistent evidence suggests that current smoking carries a higher risk than previous smoking (116-122). A study using the Systemic Lupus International Collaborating Clinics/American College of Rheumatology Damage Index that involved 105 patients with SLE followed up for 8.98 years indicated that smoking exposure may have deleterious effects on lupus morbidity (123). According to a meta-analysis conducted in 2004 that included seven case-control and two cohort studies, there was a modest association between current smoking and risk of SLE, whereas the effect of former smoking was not statistically significant (119). Subsequently, an updated meta-analysis in 2015, which contained 12 published articles encompassing 13 separate studies, found that the odds ratio (OR) values for SLE of current smokers and ex-smokers were 1.56 and 1.23, respectively, compared with nonsmokers (121). Recent research focused on the effect of cigarette smoking on the clinical manifestations of SLE has indicated that smoking is associated with photosensitivity, cutaneous damage, active SLE rash (124-127), higher SLE Disease Activity Index (SLEDAI) scores (128), pleuritis, peritonitis, metabolic syndrome (129), neuropsychiatric symptoms (130, 131), vascular necrosis (132), thrombotic events (133-136), cardiovascular disease (137), peripheral vascular disease (138, 139), and the production of anti-phospholipid antibodies (136). Moreover, smoking lowers the efficacy of medicines used to treat SLE (3, 140, 141). Likewise, a prospective cross-sectional study of Chinese SLE patients performed in 2015 reported that cigarette smoking causes the development and worsening of symptoms in SLE patients, including photosensitivity, nephropathy, and proteinuria, compared with nonsmokers (after adjustment for age and gender), whereas SLEDAI scores did not differ significantly between smokers and non-smokers (142). Taken together, these studies indicate that smoking is associated with an increased risk of developing SLE.
The mechanism whereby smoking affects SLE pathogenesis remains unclear. In recent years, several new lines of evidence have suggested that the effect of smoking in SLE may be modulated by gene polymorphisms and epigenetic changes. Studies of a Japanese population by Kiyohara et al. showed that smokers with the N-acetyltransferase 2 (NAT2) slow acetylator genotype were at a significantly higher risk of SLE (OR 2.34, 95% CI 1.21-4.52) compared with nonsmokers carrying the rapid acetylator genotype (143). Moreover, Kiyohara et al. also demonstrated that in smokers carrying rs1061622 T/G in TNFRSF1B, which confers an increased risk for SLE (OR 1.56, 95% CI 0.99-2.47), 49% of the excess risk for SLE resulted from gene-environment interactions. In addition, although a significant association between the TT genotype of STAT4 rs7574865 and increased risk of SLE (OR 2.21, 95% CI 1.10-4.68) was found in that study, there was no significant interaction between STAT4 polymorphisms and smoking (144). Further, smokers carrying rs4646903 C/C in the CYP1A1 gene, which encodes a monooxygenase that generates various reactive oxygen species, were also at a significantly increased risk of SLE (OR 9.72, 95% CI 2.73-34.6), with the presence of rs4646903 contributing over 60% of the excess risk of SLE (145). Therefore, several gene polymorphism-smoking interactions increase the risk of SLE. In addition, cigarette smoking, as a lifestyle factor, may influence DNA methylation patterns and thereby change the expression levels of disease-relevant genes (146-151). In a genome-wide DNA methylation analysis of peripheral blood mononuclear cells by Dogan et al., it was found that the methylation levels of genes implicated in inflammatory and immune function pathways were altered by cigarette smoking, which could consequently cause complex illnesses with inflammatory components (152). Notably, there are indications that the DNA methylation state may recover after the cessation of cigarette smoking (153, 154). However, much more remains to be done with respect to the elucidation of the interactions between gene polymorphisms and epigenetic changes on the one hand and smoking on the other.

Figure 2. Mechanisms of other lifestyle factors' effects on SLE incidence and manifestations. (UVR, ultraviolet radiation; NAT, N-acetyltransferase; IFN, interferon; TNF, tumor necrosis factor; IL, interleukin; CXCL, chemokine (C-X-C motif) ligand; CCL, chemokine (C-C motif) ligand; ICAM, intercellular adhesion molecule; HMG, high-mobility group protein).
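The fraction of "excess risk resulting from gene-environment interactions" quoted above is commonly computed with Rothman's relative excess risk due to interaction (RERI) and the attributable proportion (AP). The sketch below uses hypothetical odds ratios, since the stratified estimates from the cited studies are not reproduced here:

```python
def interaction_measures(or_ge, or_g, or_e):
    """Additive-interaction measures from odds ratios (rare-disease
    assumption): RERI = OR_GE - OR_G - OR_E + 1 and AP = RERI / OR_GE."""
    reri = or_ge - or_g - or_e + 1.0
    return reri, reri / or_ge

# Hypothetical values: joint exposure (risk genotype + smoking) versus
# genotype alone and smoking alone, each relative to the doubly unexposed.
reri, ap = interaction_measures(or_ge=3.0, or_g=1.56, or_e=1.4)
print(f"RERI = {reri:.2f}; proportion of the joint-exposure risk "
      f"attributable to interaction = {ap:.0%}")
```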
Ultraviolet radiation
Ultraviolet radiation (UVR) is an important environmental factor inducing SLE, as demonstrated in various studies of human populations and experimental studies (155) (Figure 2). It plays a crucial role in the pathogenesis of lupus by inducing a proinflammatory environment and leading to abnormal, long-lasting photoreactivity via inflammatory mediators such as proinflammatory cytokines, chemokines, and adhesion molecules. UVR exposure upregulates the expression of proinflammatory cytokines such as IFN-α, IL-1, IL-6, and TNF-α (156). IFNs increase the expression of proinflammatory chemokines, including chemokine (C-X-C motif) ligand (CXCL) 9, CXCL10, and CXCL11, which recruit chemokine (C-X-C motif) receptor 3 effector cells and induce keratinocyte apoptosis (157).
In addition, one study revealed that UVR exposure induced high-mobility group protein B1 (HMGB1) release, which is related to the number of apoptotic cells in patients with SLE. HMGB1 released from apoptotic keratinocytes exerts inflammatory effects through binding to its receptors, resulting in the development of inflammatory lesions in the skin of patients with SLE upon UVR exposure (160).
If UVR is a trigger for SLE onset, glutathione S-transferases (GSTs, detoxification enzymes that protect cells from attack by reactive electrophiles produced by certain stressors, such as infection) may play a key role (3). The Mu isoenzyme of GST (GSTM1) is dominantly inherited. A population-based case-control study reported a threefold increased risk of SLE associated with 24 or more months of occupational sun exposure among Caucasian participants with the null GSTM1 genotype (which leads to decreased activity of the GST enzyme). No effect of occupational sun exposure on SLE risk was seen in participants with the positive genotype (i.e., with full activity of the GST enzyme) (161). However, further mechanisms by which UVR affects SLE progression remain to be discovered and explored.
Consumption of alcohol and caffeine-rich beverages
Early epidemiological studies showed no significant association between alcohol consumption and SLE (110, 162-166). However, in the last several decades, several studies have consistently suggested that moderate alcohol consumption is negatively associated with the risk of SLE, irrespective of the type of alcoholic beverage (3, 112, 115, 167, 168). A meta-analysis of six case-control studies and one cohort study published in 2008 revealed that moderate alcohol consumption likely has a protective effect against the development of SLE (169). Furthermore, a case-control study from Japan suggested that consumption of black tea (OR = 1.88, 95% CI 1.03-3.41) and coffee (OR = 1.57, 95% CI 0.95-2.61) increased the risk of SLE (Figure 2) (170). Gene-environment interactions may be implicated in the mechanisms responsible for the protective effects of alcohol consumption and the SLE-aggravating action of caffeine-rich beverages. Kiyohara et al. showed that the NAT2 genotype significantly affected the association between SLE risk on the one hand and alcohol and black tea consumption on the other (170). Another study, which enrolled 505 patients with SLE from the Korean Lupus Network (KORNET) SLE registry between January 2014 and January 2016, showed that current alcohol consumption likely influences the development of cutaneous damage in patients with SLE (166).
In conclusion, the available evidence indicates that cigarette smoking, caffeine-rich beverages, and UVR may promote the progression of SLE, while the role of alcohol consumption remains controversial and needs more research.
Future directions
Modifying lifestyle risk factors could be the basis of potential preventative measures or therapy for SLE in the future. Insights into cellular and molecular mechanisms of negative and positive effects of lifestyle preferences on SLE incidence and manifestations are still being researched. These mechanisms involve gene-environment interactions, epigenetic changes, immune dysfunction, hyperinflammatory response, cytotoxicity, and others. Practical measures with regard to these lifestyle choices in the future will benefit SLE patients and may provide potential therapy strategies.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. | 2022-09-15T13:37:59.200Z | 2022-09-15T00:00:00.000 | {
"year": 2022,
"sha1": "bba275aae2ad78c448e86b01e98aef5cc119325d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "bba275aae2ad78c448e86b01e98aef5cc119325d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
233975220 | pes2o/s2orc | v3-fos-license | Selection of Parameters for Accumulating Layer of Solar Walls with Transparent Insulation
One of the strategies to improve the energy performance of buildings may be the use of passive solar systems with transparent insulation. In the article, a numerical model of solar wall (SW) with transparent insulation (TI) obtained using the method of elementary balances is presented. On this basis, numerical simulations of the behavior of SW with a transparent honeycomb insulation made of modified cellulose acetate were performed for 4 different climatic conditions in Europe (Stockholm, Warsaw, Paris, and Rome). For each location, the calculations were carried out for three different TI thickness values (48, 88, and 128 mm), for thermal diffusivity of the accumulating layer (AL) ranging from 4.32 × 10−7 to 8.43 × 10−7 m²/s, and for its thickness ranging from 0.1 to 0.5 m. The purpose of simulations was to select the appropriate material and thickness of AL and TI for the climatic conditions. The following solutions proved to be the most favorable: Stockholm: TI—thk. 128 mm, AL—sand-lime blocks, thk. 25 cm; Warsaw: TI—thk. 128 mm, AL—sand-lime blocks, thk. 27 cm; Paris: TI—thk. 88 mm, AL—solid ceramic brick, thk. 27 cm; Rome: TI—thk. 48 mm, AL—solid ceramic brick, thk. 29 cm.
Introduction
The European Union has imposed an obligation on all member countries to reduce energy consumption, an obligation that is particularly relevant to sectors of the economy characterized by significant energy consumption. As buildings are responsible for approximately 40% of the total energy consumption worldwide, the construction industry is one such sector [1].
About 35% of the annual energy consumption in residential buildings is used for heating and ventilation, while public buildings use about 45% of energy for this purpose [2]. One of the strategies to improve the energy performance of buildings and reduce their heat demand can be the use of passive solar systems integrated into the external walls of buildings (e.g., Trombe walls) [3,4]. The traditional Trombe wall is a passive solar energy system based on indirect gains with the use of a heat accumulating layer (AL). It consists of a massive wall, an air layer, and glazing, which together form a system capable of absorbing, collecting, and gradually releasing heat into the building. However, very high heat losses in sunless periods and at night are one of the main drawbacks of the Trombe wall [5]. The efficiency of these systems can be improved by using transparent insulation (TI) instead of traditional glazing, which is why a solar wall (SW) with transparent insulation has recently become an interesting design solution for newly constructed energy-efficient buildings and for renovations of buildings to a passive standard [6,7]. TI (characterized, similarly to glass, by low infrared losses) performs a function identical to that of traditional insulation, i.e., it limits heat losses from the building; however, it additionally enables the transmittance (at a level of about 50% [6]) of solar radiation to the AL. The energy from the solar gains available during the day is stored in the massive AL. In an earlier life-cycle study, different Trombe wall core materials were analyzed, as well as two different types of heating: gas and electric. It was found that lower annual primary energy consumption during the life cycle is obtained for a core made of heavier materials and with a lower value of grey energy used for their production. In order to achieve maximum primary energy savings and minimum environmental impact, the core of the Trombe wall must have an optimal thickness. This value depends on the type of heating: in the case of electric heating, the optimum thickness of the brick wall is about 35 cm, and in the case of gas heating, about 25 cm.
In the study [27], the heat demand for heating and cooling was calculated and the global warming potential (GWP) was determined for a residential building located in Ancona (Italy) with an unventilated Trombe wall. The GWP indicator was determined for two phases of the facility's life cycle: the pre-utilization phase (it takes into account the purchase of raw material, production of materials, transport, and construction) and the utilization phase (it takes into account the energy needs for heating and cooling). The energy demand was determined using EnergyPlus computer program as the difference in demand between the reference building without the SW and that with it. Three AL material variants were analyzed: concrete, brick, and cellular concrete as well as three core layer thickness cases: 20, 30, and 40 cm. It turned out that during the utilization phase, the energy demand depends to a large extent on the thermal properties of selected material and is the lowest for a Trombe wall made of cellular concrete, while the cooling energy demand is the lowest for a SW with the concrete core. Considering both the pre-utilization and utilization phase, the best overall performance was achieved with the cellular concrete wall whose production cycle has a low environmental impact, and at the same time, high energy efficiency during the utilization phase. The authors stated that reducing the thickness of the SW has two effects: the negative impact on the environment in the pre-utilization phase is lower because the amount of material produced and transported is lower; the energy efficiency in the utilization phase is compromised due to the lower thermal resistance of the system. However, the overall efficiency of the SW increases with the reduction of the wall thickness, due to the dominant influence of the pre-utilization phase.
In the literature available to the authors of this article, there are no works devoted to the issue of the selection of appropriate material and thickness of the AL of SWs with TI. Such walls, due to the increased thermal resistance of TI in relation to the traditional glazing used in Trombe systems, are characterized by a slightly different mode of operation and a higher tendency to overheat (on a sunny day the temperature on the absorber may exceed 120 °C [22]). This can lead to a situation in which the TI temperature values go beyond the temperature limits for safe operating conditions of the insulation. The optimal type of wall core material and its thickness depend, of course, on the latitude and climate in which the building is located [28], which further complicates the considerations.
Numerical simulations are the most commonly used approach for testing the effectiveness and sensitivity of selected SW parameters [9,29]. Compared to testing on a scaled model or on actual facilities, numerical calculations are clearly much 'cheaper' in terms of both time and cost. Changes in the SW parameters can be easily introduced into the software and thus provide guidelines for optimal solutions in real life [6]. Various types of calculation models and approximate methods can be used to simulate the behavior of SWs with TI. Programs such as EnergyPlus, ESP-r, and TRNSYS provide the possibility to model SWs dynamically over long periods of time, taking into account the geometry and thermal characteristics of the building, as well as the climate in which it is located [21,30]. For a simplified analysis of the impact of TI walls on the heating or cooling demand of a building, quasi-stationary algorithms such as those presented in the paper [7] or the standard [31] can be used. A non-stationary model of heat transfer based on electrical analogies can be used for a shortened analysis of SW behavior during one day [32,33]. Such calculations can predict the thermal processes taking place in the wall during days with different insolation and determine the daily thermal balance of the wall in different weather conditions. A numerical SW model can also be obtained using finite differences in the heat equation, after introducing an appropriate source term associated with the absorption of solar radiation by the absorber [34].
In this work, the method of elementary balances was used to analyze the behavior of the SW with TI. Within this method, differential elements (or elementary volumes) represented by nodal points are distinguished in the considered area. It is assumed that heat capacity and heat sources are geometrically assigned to the nodes. The heat flow resistances, on the other hand, are assigned to the segments connecting adjacent nodes. This is a very universal method of creating finite difference equations for heat flow problems [35]. Importantly, the radiative heat transport between the surfaces of the TI and AL constituting the boundaries of the air gap can easily be taken into account by the source terms in the energy balance for the elementary volumes including these surfaces. The heat exchange by convection within the air gap is taken into account by increasing the air heat transfer coefficient in proportion to the Nusselt number [32,36]. During the calculation, a constant distance between the nodes, equal to 4 mm in every wall layer, was assumed. Such a dense spatial division ensured high accuracy of the conducted simulations. The length of the time step changed during the calculation and was selected in such a way that the standard convergence condition for the numerical method [35] is met and, at the same time, the temperature does not change by more than 0.1 °C between individual steps at any node. The calculations were made for the whole heating season (from October to April) for SWs with TI oriented south and located in four different places in Europe: Stockholm (Sweden), Warsaw (Poland), Paris (France), and Rome (Italy).
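A minimal sketch of the time-step control described above. The arrays and helper names are illustrative assumptions, not the authors' implementation: `capacities` holds c_p·ρ·V per node, `conductance_sums` the sum of 1/R over all connections of a node, and `rhs` the net heat flux currently flowing into each node:

```python
import numpy as np

def choose_dt(rhs, capacities, conductance_sums, dT_max=0.1):
    """Largest explicit time step (s) that satisfies the standard
    convergence condition dt <= C_i / sum_j(1/R_ij) at every node and
    keeps every nodal temperature change below dT_max (here 0.1 degC)."""
    dt = np.min(capacities / conductance_sums)      # stability bound
    rates = np.abs(rhs) / capacities                # |dT/dt| per node (K/s)
    peak = rates.max()
    if peak > 0.0:
        dt = min(dt, dT_max / peak)                 # 0.1 degC cap per step
    return dt

# Illustrative call for three nodes of a hypothetical grid.
C = np.array([6300.0, 6300.0, 6300.0])    # J/K
G = np.array([225.0, 400.0, 400.0])       # W/K (sum of 1/R per node)
Q = np.array([350.0, -20.0, 5.0])         # W, net flux into each node
print(f"dt = {choose_dt(Q, C, G):.2f} s")
```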
The purpose of the simulations presented in this work was to select the right AL material and thickness as well as TI thickness for different climatic conditions in Europe. For each location, the calculations were carried out for AL thermal diffusivity ranging from 4.32 × 10−7 m²/s to 8.43 × 10−7 m²/s (in steps of one-twentieth of the range), AL thickness changing from 10 cm to 50 cm (in steps of one-twentieth of the range), and for three different TI thicknesses (48, 88, and 128 mm). As a result of the calculations, the energy balance of the analyzed SWs was obtained for the individual months of the heating period and for the entire heating period, as well as the time during which the SWs act as a heat source in the room. Moreover, the time in which the temperature resistance (140 °C) of the TI is exceeded was also determined. On this basis, the authors proposed SW solutions with the optimal AL parameters and TI thickness for each of the analyzed locations.
The original elements of the work include contour graphs obtained through the performed simulations, which allow estimating, for the considered SWs, the value of the thermal balance, the length of the heating time, and the average temperature wave time lag, depending on the thickness and thermal diffusivity of the applied AL with TI of different thicknesses, for the 4 locations in Europe. Another original element of the conducted analyses is the selection of AL parameters with regard to the temperature conditions in which the TI under consideration can operate safely. This is an important aspect of the discussed issue, often overlooked in works in this field.
The objective of this study is to overcome at least some of the barriers to the widespread use of TI and to make it easier for designers to construct optimal SWs for the different climate patterns and the different building materials available in particular European regions.
Solar Wall
Within the study, SWs equipped with TI in the form of a honeycomb (TIMax CA) are analyzed. The insulation is made of modified cellulose acetate with a density of 16 kg/m³ and placed between two 4 mm thick panes. This material shows resistance to the effects of a long-lasting temperature of 100 °C and short-term resistance to a temperature of 140 °C [37]. If the duration of short-term thermal exposure is prolonged, the honeycomb becomes brittle. However, if the insulation is not mechanically loaded, the honeycomb retains its structure and thermal insulation properties [38]. The literature on the subject includes studies on the influence of temperature, humidity, and solar radiation on the functional properties of modified cellulose acetate film and TI structures made of it [39,40]. In the work [39], the films were aged in an aging chamber where they were artificially weathered, i.e., they were exposed to the action of UV radiation and hot, humid air (temperature 65 °C, relative humidity 80%). Additionally, the samples were subjected to thermal aging in hot air only, at temperatures of 80 °C and 120 °C. On the basis of the obtained results, the authors concluded that as a result of artificial weathering, the thermal conductivity of the TI structure with rectangular cells and a thickness of 10 cm decreased by 1.3%, while the solar radiation transmittance decreased by 12.9%. The changes in transmittance were associated with an increase in light scattering and yellowing of the film. The authors found that, unlike artificial weathering, the action of hot air alone does not have a significant effect on the transparency of the film. In the work [40], TI structures made from modified cellulose acetate were investigated in terms of their applicability in solar collectors. The tests were carried out in laboratory conditions, subjecting the samples to high temperatures and UV radiation. The authors found that after 450 h of aging at 140 °C, the optical performance of the TI had decreased by approximately 3%. Additionally, it was shown that the combined effect of UV radiation and high temperature may cause yellowing and deterioration of the optical properties of TI even at temperatures below 100 °C (i.e., below the permissible operating temperature declared by the manufacturer). However, as no studies on the behavior of cellulose acetate TI in natural conditions were available to the authors of this study, it was preliminarily assumed in the simulations that the performance parameters of the analyzed TI remain constant throughout the life cycle of the building.
Dust on the outer surfaces of the TI may also be a factor influencing the SWs' efficiency in collecting heat. Although the authors did not find research on this problem concerning TI specifically, the literature contains articles on the impact of dust deposition on the efficiency of other solar installations [41,42]. It was shown in the work [41] that the predicted thermal efficiency of a collector with an inclination angle of up to 45° decreases by 10.7% to 21.0% in the case of strong dust deposition, while the optical efficiency of the collector decreases by 8.39% compared to a collector with a clean cover surface. However, since the amount of deposited dust decreases with the inclination angle of the surface [42], it can be expected that dust will have a smaller effect on the efficiency of the vertical TI than on the solar collector described above. Nevertheless, in order to maintain their original efficiency, TI surfaces should be cleaned regularly (before the start of the heating season and, if necessary, also during the season). It also seems reasonable for the manufacturer to cover the outer surface of the TI with a self-cleaning coating.
The calculations were carried out for three variants of insulation thickness, namely l_TI = 48, 88, and 128 mm. The solar radiation energy transmittance coefficients, τ_TI, the heat transfer coefficients of the whole set, U_TI, and the effective thermal conductivity of the cellulose acetate panel itself, λ, for the insulation of individual thickness values are presented in Table 1 (DQ stands for dimensionless quantity). The values of the effective thermal conductivity coefficient of the TI material were determined on the basis of the specified U_TI values of the whole set, assuming that the λ coefficient for the glazing is 1.0 W/(m·K). It is assumed that, going inwards, there is a 2 cm thick non-ventilated air gap in the SW behind the TI and before the AL, and an absorber in the form of a black paint layer with a solar absorption coefficient of 0.94 on the AL surface on the air gap side.

In order to protect the building against overheating, the TI layer is designed to be equipped with rolling shutters, which are lowered in spring and summer (from May to September) in all considered cases. In the case of the analyzed SW solution, the roller shutters should be mounted on the outer surface of the wall. They can be lowered manually or controlled automatically. Leaving the roller shutters raised in the summer, especially in areas with high insolation (e.g., in Rome), may result in the insulation material exceeding the temperature of 140 °C for several hours, which, if repeated, may lead to yellowing or even the onset of melting of the TI [40], i.e., to a significant lowering of its optical properties and thermal efficiency.
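The back-calculation of the effective conductivity λ of the TI panel from the U_TI of the whole set can be sketched as below, treating the set as glass-TI-glass resistances in series. The surface resistances and the example U-values are assumed placeholders (the Table 1 values are not reproduced in the text):

```python
def lambda_ti(u_set, l_ti, t_glass=0.004, lam_glass=1.0,
              r_se=0.04, r_si=0.13):
    """Effective conductivity (W/(m*K)) of the TI panel, obtained by
    subtracting the remaining series resistances from 1/U of the set."""
    r_ti = 1.0 / u_set - r_se - r_si - 2.0 * t_glass / lam_glass
    return l_ti / r_ti

# Hypothetical U-values for the three thicknesses; units: W/(m^2*K) and m.
for u, l in [(1.1, 0.048), (0.9, 0.088), (0.8, 0.128)]:
    print(f"l_TI = {l * 1000:.0f} mm -> lambda ~ {lambda_ti(u, l):.3f} W/(m*K)")
```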
Climatic Data
The thermal performance of SWs equipped with TI is determined by the climatic conditions in which they operate, as well as by location and construction factors (wall orientation, shading, type of insulation, accumulating layer material, type of air gap). In this context, the climatic conditions are characterized by quantities such as solar irradiance, ambient air temperature, and wind speed.
The solar irradiance, its distribution over time, and its availability are essential for assessing the effectiveness of SWs. In Europe, the average annual amount of solar radiation falling on the horizontal plane varies considerably depending on the location and usually ranges between 700-1800 kWh/m². Annual insolation of less than 700 kWh/m² is found in regions such as northern Sweden and Finland or Scotland, while insolation of more than 1800 kWh/m² can be observed in southern Spain and Portugal [43]. Of course, the highest level of solar irradiance on the ground occurs statistically in June or July and the lowest one in December. In the period from October to March, only a small part of the total annual insolation is available, usually not exceeding 30% of the total value. This fraction increases when moving towards southern Europe. For example, it is about 16% for Stockholm, 21% for Warsaw, 25% for Paris, and 29% for Rome [44].
Within the study, the optimum TI thickness and the AL material with optimum thermal diffusivity were sought for an SW oriented south and located in four European cities, namely Stockholm (Sweden), Warsaw (Poland), Paris (France), and Rome (Italy). In the cities under consideration, the annual horizontal plane insolation is, respectively [44]: Stockholm-958 kWh/m², Warsaw-1077 kWh/m², Paris-1182 kWh/m², and Rome-1652 kWh/m². When selecting the places whose meteorological data were used in the calculations, efforts were made to ensure that these cities represent the different types of climates found in Europe, and that the average annual insolation on the horizontal plane at these locations is within the typical European range of 700-1800 kWh/m².
According to Köppen's classification, the climate in Stockholm and Warsaw is of the Dfb type, i.e., a continental climate with warm, wet summers and moderately cold winters. However, because Stockholm lies further north, temperatures there are lower than in Warsaw. The climate in Paris belongs to the Cfb category, a temperate oceanic climate with mild winters and warm, humid summers, while in Rome we are dealing with the Csa-type climate, i.e., a temperate Mediterranean climate with mild winters and hot, dry summers. Figures 2-4 show a comparison of the monthly mean outdoor temperature, the monthly insolation of the south-oriented vertical plane, and the monthly mean wind speed in the individual months of the heating period (from October to April) for all analyzed cities, respectively.
According to Köppen's classification, the climate in Stockholm and Warsaw is of t Dfb type, i.e., the continental climate with warm and wet summers and moderately co winters. However, due to the fact that Stockholm is further north, temperatures in Stoc holm are lower than in Warsaw. The climate in Paris belongs to the category of Cfb, whi is a temperate oceanic climate with mild winters and warm, humid summers, while Rome, we are dealing with the Csa-type climate, that is, a temperate Mediterranean c mate, with mild winters and hot and dry summers. Figures 2-4 show a comparison of t monthly mean outdoor temperature, monthly insolation of the south-oriented verti plane, and the monthly mean speed of wind in the individual months of heating peri (from October to April) for all analyzed cities respectively. In the paper [45], the Winter Climatic Severity Index (WCSI) and Summer Clima Severity Index (SCSI) were introduced to describe the nuisances of a given climate, spectively in winter and summer. Depending on the WSCI values, the authors have s gled out five different climate zones marked with the letters A, B, C, D, and E, with ea successive climate characterized by a more severe winter. The SSCI values were used the researchers to distinguish four climate zones 1, 2, 3, and 4, with different summer n sance, whereby the higher the number the zone has, the hotter the summer in the zon The values of the discussed indicators in the case of the individual cities analyzed in th work are respectively: Stockholm-WSCI = 2.91, SCSI = 0.01; Warsaw-WSCI = 2.38, SC = 0.26; Paris-WSCI = 1.39, SCSI = 0.31; Rome-WSCI = 0.51, SCSI = 1.37. This allows t above locations to be classified by the zones: Stockholm, Warsaw-E1, Paris-D1, Rome on the border of zones B3 and C4. In the paper [45], the Winter Climatic Severity Index (WCSI) and Summer Climat Severity Index (SCSI) were introduced to describe the nuisances of a given climate, re spectively in winter and summer. Depending on the WSCI values, the authors have sin gled out five different climate zones marked with the letters A, B, C, D, and E, with eac successive climate characterized by a more severe winter. The SSCI values were used b the researchers to distinguish four climate zones 1, 2, 3, and 4, with different summer nu sance, whereby the higher the number the zone has, the hotter the summer in the In the paper [45], the Winter Climatic Severity Index (WCSI) and Summer Climatic Severity Index (SCSI) were introduced to describe the nuisances of a given climate, respectively in winter and summer. Depending on the WSCI values, the authors have singled out five different climate zones marked with the letters A, B, C, D, and E, with each successive climate characterized by a more severe winter. The SSCI values were used by the researchers to distinguish four climate zones 1, 2, 3, and 4, with different summer nuisance, whereby the higher the number the zone has, the hotter the summer in the zone. The meteorological data necessary to carry out computer simulations have been downloaded for each of the locations under consideration from the European Commission's Photovoltaic Geographical Information System [44]. These data are a set of hourly parameters, characteristic for the climate of a given place and are called a typical meteorological year. 
The values used in the presented work are a sequence of hourly values of the outside temperature, the solar irradiance falling on a vertical plane oriented to the south, and the wind speed, for the whole heating period and for the two months preceding the heating period (data for August and September were used to determine the initial temperature distributions in the walls at the beginning of October). Diagrams of these data are shown illustratively in Figures 5 and 6 for the temperature and solar irradiance courses, respectively. It is worth noting at this point that the length of the heating season usually differs between the individual locations: Stockholm-from mid-September to mid-May [46]; Warsaw-from September to May; Paris-from October to April [47]; Rome-from November to mid-April [48]. However, in order to unify the calculations and increase the comparability of the results, the same average length of the heating season, i.e., from October to April, was adopted in this work for all the analyzed cities.

Figure 5. Outdoor temperature during the heating period (on the basis of data from [44]).
Figure 6. Solar irradiance on a vertical south-facing flat surface during the heating period (on the basis of data from [44]).
Governing Equations
Differential equations describing the non-stationary heat flow in the SW can be obtained using several different methods. The numerical model can be built on the basis of an analogy to an electrical circuit [32,33], which treats the wall as a grid of points with specific thermal capacities, connected to each other by segments with a given thermal resistance (i.e., as a set of resistors and capacitors). The temperature at a given point is the result of a balance of the heat fluxes flowing between adjacent nodes and the solar radiation flux, and it depends on the thermal capacity attributed to the node. A numerical SW model can also be obtained by discretizing the heat equation with finite differences after introducing an additional source term related to the absorption of solar radiation by the absorber [34]. This method is particularly convenient when the nodal grid has a constant step and when there are no changes in material coefficients between adjacent nodes. Otherwise, it leads to finite difference equations of a more complex form [35].
A universal method of creating finite difference equations in problems of non-stationary heat transport is the method of elementary balances, which consists in making internal energy balances for individual finite difference elements, in each of which a nodal point is also distinguished (usually at its geometric center of gravity). The same indexes i = 1, 2, . . . , n are assigned to the pairs consisting of a given finite difference element and the node belonging to it, where n is the number of all elements. It is assumed that the sum of heat fluxes flowing to a given node from adjacent nodes, the external environment, and internal heat sources contributes to the change of internal energy of the finite difference element [35], i.e.,

$$\sum_{j} Q_{ij} + Q^{S}_{i} + q^{S}_{i}\,F_{i} = c_{p,i}\,\rho_{i}\,V_{i}\,\frac{dT_{i}}{dt} \qquad (1)$$

where: Q_ij is the heat flux (W) flowing from the j-th node to the i-th node; Q^S_i is the heat flux (W) flowing to the i-th node from the ambient (it occurs only for the nodes lying on the external boundaries of the wall); q^S_i is the average surface density of heat sources (W/m²) in the i-th element; F_i is the surface area (m²) of the i-th element on which the heat source occurs (F_i q^S_i is equal to zero if no heat sources occur in the i-th element); V_i is the volume (m³) of the i-th element; c_p,i is the specific heat of the material (J/(kg·K)) in the i-th element; ρ_i is the material density (kg/m³) in the i-th element; T_i is the temperature (K) at the i-th node; and t is time (s). If there are different materials within the i-th element, then the product c_p,i ρ_i is a weighted average with regard to the volume fractions of these materials in the volume V_i. The heat fluxes in Formula (1) are given by the dependencies

$$Q_{ij} = \frac{T_{j} - T_{i}}{R_{ij}}, \qquad R_{ij} = \frac{\Delta x_{ij}}{\lambda_{ij}\,F_{ij}} \qquad (2)$$

$$Q^{S}_{i} = \frac{T_{amb} - T_{i}}{R^{S}_{i}}\,F^{S}_{i} \qquad (3)$$

where T_i is the temperature (K) at the i-th node; T_amb is the temperature (K) in the SW ambient; R_ij is the thermal resistance (K/W) of the material between the i-th and j-th nodes; λ_ij is the thermal conductivity coefficient (W/(m·K)) of the material between the i-th and j-th nodes; Δx_ij is the distance (m) between the i-th and j-th nodes; F_ij is the average surface area (m²) of heat flow perpendicular to the segment connecting the i-th and j-th nodes; R^S_i is the heat transfer resistance ((m²·K)/W) on the external surface of the wall belonging to the i-th element; and F^S_i is the area (m²) of the external surface of the wall belonging to the i-th element.
In the issue under consideration, we can assume that we are dealing with a one-dimensional heat flow; then F_i = F_ij = F^S_i = 1 m² and V_i = 1 m² · Δx_i, where Δx_i is the thickness of the i-th element. This thickness is related to the distances between adjacent nodes by the relationship

$$\Delta x_{i} = \frac{\Delta x_{i-1,i} + \Delta x_{i,i+1}}{2} \qquad (4)$$

Taking the above into account, Equation (1) can be written for the i-th node in the form

$$\frac{\lambda_{i-1,i}\,(T_{i-1} - T_{i})}{\Delta x_{i-1,i}} + \frac{\lambda_{i,i+1}\,(T_{i+1} - T_{i})}{\Delta x_{i,i+1}} + q^{S}_{i} + \frac{T_{amb} - T_{i}}{R^{S}_{i}} = c_{p,i}\,\rho_{i}\,\Delta x_{i}\,\frac{dT_{i}}{dt} \qquad (5)$$

In further considerations, the thermal diffusivity coefficient of the material between the i-th and j-th nodes is introduced,

$$a_{T,ij} = \frac{\lambda_{ij}}{c_{p,i}\,\rho_{i}} \qquad (6)$$

and the time derivative occurring on the right side of Equation (5) is approximated with the right-hand difference quotient. Under these assumptions, the internal energy balance for the i-th element at the moment t_k can be expressed as

$$T_{i,k} = T_{i,k-1} + \frac{\Delta t}{\Delta x_{i}}\left[\frac{a_{T,i-1,i}\,(T_{i-1,k-1} - T_{i,k-1})}{\Delta x_{i-1,i}} + \frac{a_{T,i,i+1}\,(T_{i+1,k-1} - T_{i,k-1})}{\Delta x_{i,i+1}}\right] + \frac{\Delta t\,q^{S}_{i}}{c_{p,i}\,\rho_{i}\,\Delta x_{i}} + \frac{\Delta t\,(T_{amb} - T_{i,k-1})}{c_{p,i}\,\rho_{i}\,\Delta x_{i}\,R^{S}_{i}} \qquad (7)$$

where the additional index k represents the quantities at the moment t_k and k − 1 those at the previous moment t_{k−1}. In the considered issue, the thermal diffusivities a_T,i−1,i and a_T,i,i+1 are equal to the thermal diffusivity of the material of a given layer in the case of nodes lying inside this layer, while in the case of nodes located at the boundary of layers, these diffusivities have a value resulting from the averaged heat capacity c_p,i ρ_i assigned to the layers' boundary nodes and depending on the properties of the adjacent layers' materials. The term q^S_i occurs for the node corresponding to the position of the absorber, where it is equal to the source term q_sol related to the solar irradiance:
$$q_{sol} = \alpha^{sol}_{abs}\,\tau_{TI}\,I_{sol} \qquad (8)$$

where α^sol_abs is the coefficient of absorption of solar radiation (dimensionless) by the absorber, τ_TI the coefficient of total permeability (transmittance) (dimensionless) of solar radiation through the TI, and I_sol the solar irradiance (W/m²) falling on the external wall surface. More generally, the term q^S_i appears for the nodes on the planes that limit the air gap, due to the radiant heat exchange between them. In this case, it is given by the expression [49]

$$q^{S}_{i} = \varepsilon_{ef}\,C\,\left(T_{a}^{4} - T_{gl}^{4}\right) \qquad (9)$$

where T_gl and T_a are the temperatures (K) on the inner surfaces of the TI glass and the absorber, respectively, C = 5.67 × 10⁻⁸ W/(m²·K⁴) is the radiation constant of a perfectly black body, and ε_ef is the equivalent emissivity (dimensionless) which, for two large parallel surfaces a short distance apart, can be calculated from the formula

$$\varepsilon_{ef} = \left(\frac{1}{\varepsilon_{gl}} + \frac{1}{\varepsilon_{a}} - 1\right)^{-1} \qquad (10)$$

In the above equation, ε_gl and ε_a denote the surface emissivity (dimensionless) of the glass (equal to 0.836 [50]) and of the absorber (taken as 0.94), respectively. The source term is equal to zero (q^S_i = 0) at the other nodes, which also means that the absorption of solar radiation inside the TI, including its glazing, is considered negligible. Based on the results presented in [34], another simplifying assumption was made, namely that the convective heat transfer within the air gap is of low intensity. This assumption allowed for a simplified treatment of convection, consisting in increasing the thermal conductivity coefficient of air in proportion to the Nusselt number of the gas [32,36]. Values of the Nusselt number for the air filling the gap were determined analogously as in the study [34]. The third term on the right-hand side of expression (7), describing the heat exchange with the ambient air, occurs only for nodes situated on the external and internal surfaces of the SW. It was assumed in the paper that the heat transfer resistance on the external surface of the SW, R^S_ext (expressed in (m²·K)/W), depends on the wind speed w according to a correlation taken from [51], while the heat transfer resistance on the internal surface, R^S_int, is constant and equal to 0.13 (m²·K)/W. At this point, it should be noted that although the assumption of one-dimensional heat flow is commonly used in the design of wall layer layouts, it neglects the effect of two-dimensional flow in the vicinity of the border of the SW and thus disturbs the 1D model of the actual barrier's behavior. However, since TIs are usually found in buildings in combination with traditional insulations (covering the remaining outer wall surfaces), and the thermal conductivity coefficient of the analyzed TI is about two times higher than the conductivity coefficient of traditional insulation, no essential transverse heat flow from the TI towards the traditional insulation should be expected in the period when the SW accumulates heat. On the other hand, it is obvious that within the AL at the boundary of the SW, a noticeable heat flow will also occur in the direction parallel to the wall surface, which will apparently increase the thermal diffusivity of the AL and increase the heat flux reaching the room. Faster heat dissipation from the absorber surface will also lower the temperature of the TI, i.e., reduce the risk of exceeding the permissible operating temperature. At night, however, heat from the traditionally insulated part of the building envelope may flow towards the SW and increase its losses.
Of course, the smaller the ratio of the SW's surface area to its perimeter, the greater the share of the described phenomena in its entire heat balance. It is also important that the effects of the two described processes (during the day and at night) will partially cancel each other out. From the above considerations, it can be concluded that the SW solution proposed with the use of the one-dimensional analysis will, in actual conditions, be characterized by an energy balance similar to the anticipated one and, at the same time, will be safer in terms of the possibility of exceeding the permissible operating temperature in the TI.
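For illustration, one explicit update step of the form (7) can be sketched in MATLAB, the environment in which the simulations reported below were carried out. This is a minimal sketch assuming a uniform grid; the function and variable names are illustrative and do not come from the original program.

```matlab
% A minimal MATLAB sketch of one explicit step of Equation (7) on a
% uniform grid; all identifiers are illustrative, not the original code.
function T_new = step_wall(T, aT, dx, dt, q_sol, i_abs, cprho, ...
                           T_ext, T_int, RS_ext, RS_int)
% T      - nodal temperatures at t_(k-1) (K), column vector of length n
% aT     - diffusivity between nodes i and i+1 (m^2/s), length n-1
% q_sol  - solar source flux at the absorber node (W/m^2), Equation (8)
% i_abs  - index of the absorber node
% cprho  - volumetric heat capacity c_p*rho per node (J/(m^3*K))
% RS_*   - surface heat transfer resistances ((m^2*K)/W)
n = numel(T);
T_new = T;
for i = 2:n-1                                  % interior conduction terms
    T_new(i) = T(i) + dt/dx^2 * ( aT(i-1)*(T(i-1) - T(i)) ...
                                + aT(i)  *(T(i+1) - T(i)) );
end
% boundary nodes: one-sided conduction plus exchange with the ambient air
T_new(1) = T(1) + dt/dx^2 * aT(1)  *(T(2)   - T(1)) ...
                + dt*(T_ext - T(1)) / (cprho(1)*dx*RS_ext);
T_new(n) = T(n) + dt/dx^2 * aT(n-1)*(T(n-1) - T(n)) ...
                + dt*(T_int - T(n)) / (cprho(n)*dx*RS_int);
% solar source term applied at the absorber node
T_new(i_abs) = T_new(i_abs) + dt*q_sol / (cprho(i_abs)*dx);
end
```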
Verification of the Numerical Model
As a result of the discretization applied to the initial-boundary value problem, the resulting system of Equations (7) is an explicit scheme. This scheme allows the temperature values T_i,k at the nodal points of the SW at any moment t_k to be determined from the temperature values T_i,k−1 at the previous moment t_k−1, and it makes it possible to change the length of the time step at individual moments of the simulation depending on the required accuracy of the calculations. The advantage of this approach is the ease of writing a computer program solving this type of initial-boundary value problem. However, this algorithm is not unconditionally convergent and imposes certain limitations on the length of the step Δt = t_k − t_k−1 that guarantee the correctness of the obtained results.
One such limitation results from the fact that the difference Equation (7) should be structured in such a way that an increase in each temperature from the previous moment, T_i,k−1, leads to an increase in the sought values T_i,k at the current moment. This condition is met if the coefficients at T_i,k−1 in Equation (7) are positive [35]. Hence, with a constant length of the spatial step Δx_i = Δx, we obtain the condition

$$\Delta t \leq \frac{(\Delta x)^{2}}{a_{T,i-1,i} + a_{T,i,i+1}} \qquad (13)$$

Condition (13) means that the limit value of Δt is to be taken as the smallest of the values determined for the individual nodes in the SW. In the analyzed issue, the greatest limitation on the length of the time step resulting from relation (13) always occurred in one of the nodes located in the air gap, i.e., in the layer with the highest values of the thermal diffusivity a_T.
During the calculations, the length of the time step was first determined individually for each successive step k from condition (13). This necessity resulted from the fact that, in order to precisely determine the value of the Nusselt number, the parameters of the air in the gap were assumed to be functions of temperature [34]. However, the analysis of the initial simulation results showed that satisfying condition (13) did not guarantee the stability of the obtained results. This was caused by the presence in Equation (7) of the source terms, which took very high values during intense solar irradiance. Ultimately, the procedure of selecting the length of the time step was based on meeting two criteria: firstly, relation (13); secondly, the assumption that the temperature at any node cannot change by more than 0.1 °C from step to step. If, after performing the calculations in a given step, it turned out that the second of the above limitations was not met, the calculations were repeated with Δt halved, and the second condition was checked again. The length of the time step selected in this way guaranteed the convergence and stability of the solutions.
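The step-selection procedure described above can be summarized by the following MATLAB sketch (illustrative names; step_wall denotes a single-step update such as the one sketched earlier):

```matlab
% Sketch of the two-criteria time-step control described above.
dt = min( dx^2 ./ (aT(1:end-1) + aT(2:end)) );   % per-node condition (13)
accepted = false;
while ~accepted
    T_try = step_wall(T, aT, dx, dt, q_sol, i_abs, cprho, ...
                      T_ext, T_int, RS_ext, RS_int);
    if max(abs(T_try - T)) <= 0.1                % second criterion: 0.1 K/step
        T = T_try;  t = t + dt;  accepted = true;
    else
        dt = dt/2;                               % halve the step and retry
    end
end
```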
In order to prove the correctness of the obtained results, preliminary simulations were carried out and compared for three different spatial step lengths, i.e., Δx_i equal to 4 mm, 2 mm, and 1 mm, with the corresponding time step values. Along with the shortening of Δx_i, the second condition concerning the length of the time step was also changed, i.e., it was assumed that the temperature at each node may change from step to step by no more than 0.05 °C and 0.025 °C for Δx_i = 2 mm and Δx_i = 1 mm, respectively. These preliminary calculations were made for the climatic conditions of Rome, where the source terms generally took the highest values. They were carried out for the thickest analyzed transparent insulation (128 mm) and for all considered values of AL thermal diffusivity and thickness. It turned out that when changing the spatial step from 4 mm to 2 mm, the maximum observed relative changes (related to the values from the shorter step) were as follows: for the heat balance of the SW during the whole heating period, 1.11 × 10⁻⁶; for the length of the heating time, 6.15 × 10⁻⁵; for the total time of TI overheating above the highest permissible temperature of 140 °C, 5.84 × 10⁻⁵; and for the mean time lag of the maximum temperature on the absorber and on the internal surface of the wall, 4.42 × 10⁻⁴. The corresponding relative changes for the spatial step change from 2 mm to 1 mm were, respectively: 1.06 × 10⁻⁶, 1.50 × 10⁻⁶, 4.08 × 10⁻⁵, and 1.10 × 10⁻⁴. The values of the relative changes between the calculation results obtained at different lengths of the spatial and time steps indicate that the differential scheme used is convergent and that the solution converges to an accurate solution.
Since the calculation time for one task (computer with 64 GB of RAM, 3.6 GHz processor) was significantly extended with each shortening of the spatial step, i.e., from 850 s to 1230 s for Δx_i = 4 mm, from 3570 s to 5730 s for Δx_i = 2 mm, and from 1.41 × 10⁴ s to 2.52 × 10⁴ s for Δx_i = 1 mm, it was decided to perform all other calculations with a spatial step of 4 mm.
Due to the lack of access to experimental data on the considered SWs, the authors decided to assess the correctness of the obtained results using the quasi-stationary method of calculating heat gains through an opaque building envelope with TI proposed in the PN-EN ISO 13790:2009 standard [31]. According to this method, the monthly solar gains via the SW per 1 m² of the wall, Φ_sol,m (J/m²), are calculated by Formula (14) of the standard from the monthly solar insolation of a plane with a given orientation, I_sol,m (J/m²), the heat transfer coefficient of the SW, U (W/(m²·K)), the heat transfer resistance of the TI, R_TI ((m²·K)/W), and the heat transfer resistance of the air gap, R_a ((m²·K)/W). By reducing the result of Equation (14) by the amount of heat lost through the SW in a given month (calculated as the product of U, the temperature difference between the internal and external environment, and the period of time), we obtain the heat balance of the wall in the considered month. Summing up the heat balances for the individual months of the heating period, the SW heat balance over the entire analyzed period is obtained. The discussed calculations were performed illustratively for two locations characterized by extremely different climates, i.e., for Stockholm and Rome, for ALs made of six different materials (CC, SCB, SLB, and OC with different densities and the parameters listed in Section 3 of the article) and three different thicknesses: 10, 30, and 50 cm. Analyzing the obtained results, it was found that in the case of Stockholm, the seasonal thermal balances of SWs obtained using the two compared methods differ on average by: 6.8% for the SW with 48 mm TI (differences in the range from 3.5% to 11.9%), 4.3% for the SW with 88 mm TI (differences ranging from 2.0% to 7.9%), and 2.6% for the SW with 128 mm TI (differences ranging from 1.9% to 4.9%). Larger differences were always obtained in the case of a thinner AL and lower thermal diffusivity of this layer. In the case of Rome, the differences between the seasonal thermal balances of the SWs were as follows: 5.6% for the SW with 48 mm TI (differences ranging from 3.8% to 9.0%), 4.6% for the SW with 88 mm TI (differences between 3.2% and 7.1%), and 3.8% for the SW with 128 mm TI (differences in the range from 2.8% to 5.7%). In all analyzed cases, the standard method gave a higher value of the seasonal heat balance than the numerical method. The same relationship could be observed in the majority of the monthly balances. From the above, it can be concluded that due to its quasi-stationary approach, the standard method slightly overestimates the solar thermal gains obtained by SWs with TI. Finally, on the basis of the conducted analyses, it was found that the two calculation methods did not show large discrepancies, and the numerical model proposed in the paper was considered verified.
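As an illustration, the seasonal balance computed with the quasi-stationary method can be sketched as follows. Since only the inputs of Formula (14) (I_sol,m, U, R_TI, R_a) are reproduced here and not the formula itself, the monthly gains are represented by a user-supplied function handle; all names are illustrative.

```matlab
% Sketch of the quasi-stationary seasonal balance check per [31].
hours_m  = [744 720 744 744 672 744 720];   % Oct ... Apr, hours per month
T_room   = 20;                              % assumed room temperature, degC
Q_season = 0;                               % seasonal balance, J/m^2
for m = 1:7
    gains  = gains_eq14(I_sol_m(m), U, R_TI, R_a);          % Eq. (14), J/m^2
    losses = U * (T_room - T_ext_m(m)) * hours_m(m) * 3600; % J/m^2
    Q_season = Q_season + (gains - losses);                 % monthly balance
end
```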
Results
The program simulating the behavior of SWs with TI was developed by the authors of this work in the MATLAB environment. During the calculations, a constant distance between the spatial grid nodes was assumed within all layers of the SW, i.e., Δx_i = Δx = 4 mm. The spatial discretization adopted in this way ensured sufficiently good accuracy of the results (see Section 2.4) and an acceptable duration of the simulations (from 850 s to 1230 s for one task, depending on its input data). Thanks to the use of a 4 mm spatial step, it was also possible to model the thin protective glass layers of the TI as homogeneous and separated from the core of the TI. The length of the time step changed during the calculations and was selected so that the convergence condition (13) was met and that the temperature did not change from step to step by more than 0.1 °C at any spatial grid node. The individual selection of the time step at each moment of the simulation allowed the calculations to be significantly shortened.
The simulations were performed for SWs with a southern orientation (recommended for SWs) for four optional locations, Stockholm, Warsaw, Paris, and Rome, representing different climatic conditions in Europe. A wall with three different TI thickness values (l_TI = 48, 88, and 128 mm) and with an AL of variable thickness and thermal properties was analyzed. It was assumed that the thickness of the layer, l_a, can take values from 0.1 m to 0.5 m (every 2 cm), while the thermal diffusivity lies within the range from 4.32 × 10⁻⁷ m²/s to 8.43 × 10⁻⁷ m²/s (every twentieth of the analyzed range). The adopted range of thermal diffusivity variability corresponds to the values of the diffusivity coefficients of the most frequently used construction materials (CC, SCB, SLB, and OC). The temperature inside the room is assumed to be constant and equal to 20 °C.
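The resulting computational plan can be summarized by the following sketch of the parameter sweep; the simulation routine simulate_sw is a hypothetical placeholder standing for one full heating-season run.

```matlab
% Sketch of the parameter sweep described above (illustrative names).
l_a  = 0.10:0.02:0.50;                       % AL thickness, m (every 2 cm)
a_T  = linspace(4.32e-7, 8.43e-7, 21);       % AL diffusivity, m^2/s
l_TI = [0.048 0.088 0.128];                  % TI thickness, m
Q = zeros(numel(l_TI), numel(l_a), numel(a_T));
for k = 1:numel(l_TI)
    for i = 1:numel(l_a)
        for j = 1:numel(a_T)
            % one full heating-season simulation per configuration
            Q(k,i,j) = simulate_sw(l_TI(k), l_a(i), a_T(j));
        end
    end
end
```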
Due to the fact that the terms describing the thermal capacity c_p,i ρ_i of the AL material appear in Equation (7) for the boundary nodes of the accumulating layer, it became necessary to assign a thermal capacity value to each thermal diffusivity of this layer (from the analyzed range of variability of a_T,i), i.e., to assume a relationship c_p,i ρ_i (a_T,i). For this purpose, the thermal parameters of typical building materials were used, commonly employed for erecting walls of buildings and having (according to the literature on the subject) the potential to be used as an AL in SWs [52]:
OC: ρ = 2400 kg/m³, c_p = 840 J/(kg·K), λ = 1.7 W/(m·K) → a_T = 8.43 × 10⁻⁷ m²/s. The relationship c_p,i ρ_i (a_T,i) was adopted in the form of a broken (piecewise-linear) line, where the thermal diffusivities with the values as for the above-mentioned materials corresponded to the values of c_p,i ρ_i for these materials, and between these points, the values of the thermal capacity c_p,i ρ_i were determined by linear interpolation.
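In MATLAB, such a broken-line mapping reduces to a one-line interpolation. In the sketch below, the SCB and SLB diffusivity breakpoints are taken from the values quoted in the Conclusions, the CC breakpoint is assumed to lie at the minimum of the analyzed range, and the heat capacity values not reproduced in the text are left as placeholders to be filled from [52].

```matlab
% Sketch of the piecewise-linear c_p*rho(a_T) mapping; only the OC
% breakpoint is reproduced in the text above, the remaining capacity
% values (marked NaN) are to be taken from [52].
aT_pts    = [4.32e-7 4.86e-7 5.38e-7 8.43e-7];  % CC (assumed), SCB, SLB, OC
cprho_pts = [NaN     NaN     NaN     2400*840]; % J/(m^3*K)
cprho_of  = @(a) interp1(aT_pts, cprho_pts, a, 'linear');
```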
In the analyzed initial-boundary value problem, the temperature distribution in the wall at the beginning of the heating season is not known in advance. In order to obtain the right initial condition, the simulation for each of the considered SWs was started from 1 August at 00:00, assuming the initial temperature distribution as the stationary distribution for the initial outdoor temperature and the absence of solar heat sources (the TI covered by the rolling shutters). This simulation was carried out until 30 September at midnight. The temperature distribution in the SW obtained for this moment was taken as the initial condition for the calculations for the heating season, from which point the solar gains were taken into account (the moment the insulation was exposed). From the preliminary simulation work carried out by the authors, it appeared that the adopted form of the initial condition affects only the calculation results corresponding to the first two weeks of the simulation. The authors of [34] reached the same conclusion. It can therefore be assumed that the initial condition set in this way is correct.
Finally, for each SW configuration and location, the thermal calculations yielded:
• the heat balance per unit area of the SW in the heating period,
• the heating time during which the SW acts as a source of heat for the room in the heating period,
• the longest time when the temperature in the TI rises above 140 °C (the longest time of TI overheating) in the heating period,
• the time lag of the maximum temperature between the absorber and the SW's internal surface in the daily cycle during the heating period.
The SW's heat balance for the period under consideration was calculated as the time integral of the heat flux over the unit area of the SW's internal surface, with the flux flowing inward taken as positive. The heating time, when the SW constitutes a source of heat for the room, was determined as the sum of the periods in which the heat flux on the internal surface flowed inward (i.e., the wall surface temperature was higher than the assumed internal air temperature of 20 °C). The maximum temperature time lag was calculated as the mean time difference between the occurrence of the maximum temperature on the absorber and on the wall's internal surface for each day in the heating season.
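These characteristics can be obtained from a simulated run with a few lines of post-processing; the sketch below assumes hourly output series and illustrative variable names.

```matlab
% Sketch of the post-processing of one seasonal run:
% q_int        - hourly heat flux on the internal surface, positive inward (W/m^2)
% T_abs, T_srf - hourly absorber / internal-surface temperatures (degC)
% t_hr         - hour stamps of the samples
Q_balance    = trapz(t_hr*3600, q_int);      % seasonal heat balance, J/m^2
heating_time = sum(q_int > 0) * 3600;        % time acting as heat source, s
lag = zeros(n_days, 1);                      % daily maximum-temperature lag
for d = 1:n_days
    idx = (d-1)*24 + (1:24);                 % samples belonging to day d
    [~, ia] = max(T_abs(idx));
    [~, is] = max(T_srf(idx));
    lag(d) = (is - ia) * 3600;               % s
end
mean_lag = mean(lag);
```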
The obtained results allowed the authors to make contour graphs showing the dependence of the seasonal heat balance of the SW on the AL thickness and its thermal diffusivity for each considered TI thickness and SW location. Analogous contour graphs were made for the heating time, the longest overheating time above 140 °C in the TI, and the maximum temperature time lag. Examples of such diagrams, concerning the SW with 128 mm thick TI located in Warsaw, are shown in Figure 7. In addition, the graphs show vertical lines corresponding to the thermal diffusivity of typical construction materials with the aforementioned parameters. The diagrams relating to the cases of TI with other thickness values and locations can be found in Appendix A.
Discussion
The diagrams in Section 3 and Appendix A show that a proper selection of the AL and TI characteristics is closely related to the SW operating parameters we want to obtain, and it depends strongly on the climatic conditions in which they are located. Some limitation in the selection of the AL material is also due to the temperature resistance of the TI (see Figures A1d-A12d), i.e., the acceptable thermal conditions under which the insulation retains its required properties.
On the basis of the diagrams shown in Figures A1a-A12a, we can state that the SW heat balance obtained during the heating season increases with the increase of the thermal diffusivity of the AL and decreases with the increase of this layer's thickness. Similar conclusions were reached by the authors of [24], in which it was stated that in the case of an unventilated Trombe wall, the heat gains decrease with the increase in wall thickness. In turn, [26] shows that an increase in the thermal diffusivity of the AL reduces the primary energy demand of a building with a glazed Trombe wall, which is tantamount to an improvement in the seasonal heat balance of such a wall. Different conclusions were reached by the authors of the article [27], who stated that the heat demand for heating a building with a Trombe wall decreases with the decrease of the AL thermal diffusivity (a lower demand was obtained using SCB than concrete, and the lowest demand was obtained for the AL made of CC). The obtained results were explained by the fact that the effect of limiting heat loss by a material with lower density (and a lower thermal conductivity coefficient) outweighs the effect of reducing solar heat gains due to the higher thermal resistance. The different conclusions presented in [27] may result from the specificity of the climate in which the analyzed building was located (Ancona, according to Köppen's classification, belongs to the climate category Cfa, a humid subtropical climate), and they cannot be uncritically extended to SWs located in regions with different meteorological data. Both studies [26,27] found that increasing the thickness of the AL improves the energy efficiency of Trombe walls (the primary energy demand and the heat demand for heating the building were reduced, respectively), which was explained by the increase in the thermal resistance of the system. Since both of these articles analyzed traditional glazed Trombe walls, this contrast shows the different behavior of SWs equipped with a glass pane versus TI.
When designing an SW, apart from its thermal balance, other parameters must also be taken into account. The time during which it acts as a heat source in a given climate is a very important parameter. As shown in Figures A1b-A12b, this heating time increases slightly with the AL thermal diffusivity for a_T ∈ [4.32 × 10⁻⁷, 4.86 × 10⁻⁷] m²/s, and it remains approximately constant for a_T ∈ [4.86 × 10⁻⁷, 8.43 × 10⁻⁷] m²/s. On the other hand, it lengthens significantly with increasing layer thickness. Therefore, in this case, we observe a trend opposite to that of the thermal balance of the wall: by increasing the thickness, the thermal balance of the SW deteriorates, while the time of heating the room by the SW is extended.
As mentioned earlier, from the point of view of the heat balance, it would be most preferable to make a thin AL from a material with high thermal diffusivity. However, the heat would then reach the room so quickly that we would practically have direct gains and, regardless of the considered climate, the time lag of the maximum temperature when transferring heat through the wall would be less than one hour (Figures A1c-A12c). As is known, direct gains cause large temperature fluctuations in rooms and usually occur at the time of day when they are least desirable. The idea behind SWs is to shift the solar heat gains in buildings to the afternoon and evening hours and to spread them over time. According to the authors, the associated minimum time lag of the maximum temperature between the absorber and the wall's internal surface should be between 4 and 5 h depending on the location, which corresponds to the occurrence of the maximum solar heat gains in the rooms 5 to 6 h after the moment the sun passes through the zenith (the maximum temperature on the absorber on a cloudless sunny day usually occurs around one hour after solar noon). For this reason, the authors propose that the optimum AL thickness should vary between 25 cm and 35 cm, depending on the material used. This minimum time of the temperature wave lag was proposed in such a way that at the turn of January and February, the maximum heat gains in the room occur one hour after sunset; hence the time for Stockholm is about 4 h.

In the case of the analyzed TI, an important criterion for the selection of the AL parameters is to avoid the possibility of overheating the insulation above 140 °C, which is its short-term thermal resistance. This condition imposes a certain limitation particularly on the lower range of the possible thermal diffusivity of the AL construction materials, while the lowest acceptable thermal diffusivity values at which overheating of the insulation does not yet occur are slightly larger for thicker insulations (Figures A1d-A12d). It is worth paying attention to the fact that in the case of all analyzed locations and all thicknesses of TI, the use of CC as the AL would cause the TI to overheat. It is also interesting to note that the range of unusable thermal diffusivity and AL thickness values is similar in Stockholm and in Rome, even though Stockholm's climate is colder than that of Rome. This is due to the specific insolation conditions in Stockholm (Figure 3), where there is a very high intensity of solar irradiance on vertical surfaces with a southern orientation in the months of March and April.
As results from the above considerations, the proper selection of the AL parameters and TI thickness is a complicated task and depends to a large extent on the specific climate in which the SW will be used. In order to facilitate this process, the authors of this study prepared graphs (Figures 8-11) of the variability of the room heating time as a function of the SW's heat balance, depending on the selected most important AL features. On these nomograms, the thick black lines correspond to different AL thickness values (l_a = 12, 25, 38, and 50 cm), the thin blue lines correspond to the individual materials specified in Section 3, the grey thick lines correspond to different maximum temperature time lags (3, 5, and 7 h), while on the left side of the red dashed line there are solutions that are not acceptable due to overheating of the TI.
On the basis of the above nomograms and the previous considerations, the authors proposed SW structural solutions for the individual locations. When selecting the AL parameters for the three different TI thickness values (48, 88, and 128 mm), the following principles were followed:
1. Adopting the lightest possible AL material for which the criterion of not exceeding 140 °C in the TI is met;
2. Adopting the smallest possible AL thickness for which the assumed maximum temperature time lag is met.
For this reason, the adopted material thickness values are theoretical ones, which may not correspond to the actual dimensions of the masonry elements available on the market in a given region. The estimated values of the AL parameters were read on the basis of the nomograms presented in Figures 8-11, and then the thickness of the AL was specified using Figures A1c-A12c. When selecting the AL parameters, the following principle was also followed: not to use solutions that are too close to the area on the nomograms where there is a risk of an excessive temperature increase in the TI (i.e., too close to the red dashed line). In the case of Warsaw, the above rules were slightly changed, and it was decided (due to the relatively low temperatures in Warsaw in January) to use SLBs in order to increase the heat gains in the building. Of course, the designer, depending on the technology used for erecting a building, the preferred time lag of the temperature wave in an SW, and the required heat balance of an SW, may propose other wall materials and geometric solutions for the individual locations based on the diagrams presented in this work.
A comparison of the monthly heat balances and heating times of the SWs, in the case of the AL parameters and materials proposed for the considered locations, is presented in Figures 12 and 13, respectively. Since in the case of an SW with TI there is a risk of excessively high temperatures on the internal surface of the wall, Figure 14 additionally presents the minimum, mean, and maximum temperatures of the SW internal surface in the individual months of the heating period for all considered locations. In turn, Figure 15 presents a verifying comparison of the monthly balances for Stockholm and Rome, calculated numerically and using the quasi-stationary standard method [31].
Figure 15. Comparison of the monthly heat balances of the SWs located in Stockholm and Rome, calculated numerically and using the standard method according to [31].
As shown in Figure 12, SWs with properly selected AL parameters and TI thickness show a positive heat balance in almost all months of the heating period, regardless of their location. The only exception is the wall located in Stockholm during December and January. However, even though the heat balance of the SW operating in Stockholm during these months is negative, there are also periods when it constitutes a heat source for the room, as shown in Figure 13 (in December the total heating time is about 4 days, and in January over 10 days). Figure 13 also shows that the SW located in Rome will heat the room for practically the entire heating season (211 out of 212 days). In the case of this wall, it may be advisable in the seasonal climate transition periods to temporarily lower the TI rolling shutters due to excessive heat gains, especially on days with high insolation in October and April. Also for the SW located in Paris, the time during which it acts as a heat source is quite long (93% of the heating period), but the heat gains obtained by this wall are usually much lower than in the case of the SW located in Rome. The heating time of the SW located in Warsaw is comparable to the heating time of the SW operating in Paris (89% of the heating season); however, the heat gains obtained by this wall are about 22% lower than in the case of the SW in Paris. The SW located in Stockholm has the shortest heating time (64% of the heating period), but its heat balance is slightly higher than that of the SW in Warsaw. It can be concluded that SWs in buildings located in the north of Europe will perform their function well, especially during the seasonal climate transition periods (autumn, spring). SWs with TI located in the central regions of Europe will also heat rooms in winter, but their heat gains will not be very high. In contrast, SWs located in southern Europe can perform their function for the entire heating period.
Based on Figure 14, since the approximate perceived temperature can be calculated as the arithmetic mean of the air temperature in the room (20 °C) and the radiant temperature of the wall, it can be concluded that in the vicinity of the designed SW, the thermal conditions will be close to the conditions of thermal comfort, which range from 20 °C to 25 °C in the heating period and from 23 °C to 26 °C in summer [53]. Although the temperature on the internal surface of the walls may temporarily increase in the analyzed cases up to 41.3 °C (the SW located in Stockholm in March), this still does not exceed the temperature range acceptable for water wall radiators (from 35 °C to 45 °C).
Conclusions
The paper presents a numerical model of an SW with TI based on the differential equations of the problem formulated on the basis of elementary balances. Using the adopted model, the behavior of SWs was simulated for different climatic conditions in Europe, represented by the cities of Stockholm, Warsaw, Paris, and Rome. For each location, the calculations were carried out for different AL parameters: thermal diffusivity varying from 4.32 × 10⁻⁷ m²/s to 8.43 × 10⁻⁷ m²/s (every twentieth of the analyzed range) and thickness varying from 0.1 m to 0.5 m (every 2 cm), and for three different TI thickness values (48, 88, and 128 mm). The SW contains thermal insulation made of modified cellulose acetate in honeycomb form (TIMax CA). The results of the calculations allowed the authors of the article to draw the following conclusions:
1. The heat gains of the SW obtained during the heating season increase as the thermal diffusivity of the AL increases and decrease as the thickness of this layer increases.
2. The time when the SW acts as a heat source in a room depends strongly on the thickness of the AL and increases with this thickness. On the other hand, thermal diffusivity has no significant influence on the length of the heating time for the considered building materials. For a_T ∈ [4.32 × 10⁻⁷, 4.86 × 10⁻⁷] m²/s, this time increases slightly with increasing AL thermal diffusivity, and it remains approximately constant for a_T ∈ [4.86 × 10⁻⁷, 8.43 × 10⁻⁷] m²/s.
3. The time lag of the maximum temperature on the absorber and the internal surface of the SW generally increases as the thickness of the AL increases within its analyzed range. On the other hand, it slightly increases with the increase of the AL thermal diffusivity for a_T ∈ [4.32 × 10⁻⁷, 4.86 × 10⁻⁷] m²/s, and then it starts slightly decreasing with a_T. The dependence of the maximum temperature time lag on the diffusivity and thickness of the AL is very similar for all analyzed locations and TI thickness values and, as might be expected, the influence of the analyzed climate conditions is of secondary importance for this SW characteristic.
4. The values of the optimal AL parameters change with the meteorological conditions of the given region, with the insolation on the wall surface being the decisive factor.
5. The decisive factors that have the greatest influence on the selection of the AL parameters, apart from the climatic conditions, are the desired time lag of the temperature wave and the risk of exceeding the permissible operating temperature in the TI material.
6. Under the analyzed conditions, the proposed AL thickness values are in the range from 25 cm to 29 cm, while the thermal diffusivity values of the AL materials range from 4.86 × 10⁻⁷ m²/s (SCB) to 5.38 × 10⁻⁷ m²/s (SLB), whereas in warmer climates, materials with lower thermal diffusivity can be used. CC is not proposed by the authors for constructing the AL in any climate, due to the danger of exceeding the temperature resistance (140 °C) of the TI.
7. In the case of the Dfb continental climates (Stockholm, Warsaw) with relatively low insolation (less than 1100 kWh/m²), a 128 mm TI thickness becomes necessary to obtain a higher heat balance of the SW. In the case of the temperate oceanic climate Cfb (Paris), 88 mm thick insulation is sufficient, while in the temperate Mediterranean climate Csa (Rome), 48 mm thick insulation is sufficient.

Funding: This research received no external funding.
Data Availability Statement:
For the calculations, the authors used, as some of the inputs, publicly available climatic and material data referenced in [37,38,44].
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A
The H subunit (Vma13p) of the yeast V-ATPase inhibits the ATPase activity of cytosolic V1 complexes.
V-ATPases are composed of a peripheral complex containing the ATP-binding sites, the V1 sector, attached to a membrane complex containing the proton pore, the Vo sector. In vivo, free, inactive V1 and Vo sectors exist in dynamic equilibrium with fully assembled, active V1Vo complexes, and this equilibrium can be perturbed by changes in carbon source. Free V1 complexes were isolated from the cytosol of wild-type yeast cells and mutant strains lacking Vo subunit c (Vma3p) or V1 subunit H (Vma13p). V1 complexes from wild-type or vma3Δ mutant cells were very similar, and contained all previously identified yeast V1 subunits except subunit C (Vma5p). These V1 complexes hydrolyzed CaATP but not MgATP, and CaATP hydrolysis rapidly decelerated with time. V1 complexes from vma13Δ cells contained all V1 subunits except C and H, and had markedly different catalytic properties. The initial rate of CaATP hydrolysis was maintained for much longer. The complexes also hydrolyzed MgATP, but showed a rapid deceleration in hydrolysis. These results indicate that the H subunit plays an important role in silencing unproductive ATP hydrolysis by cytosolic V1 complexes, but suggest that other mechanisms, such as product inhibition, may also play a role in silencing in vivo.
V-ATPases are highly conserved proton pumps distributed throughout the vacuolar network in all eukaryotic cells. V-ATPases maintain organelle acidification and affect cytosolic pH and ion balance, and their activity has been linked to a diverse array of cellular processes ranging from zymogen activation to protein sorting to viral membrane fusion events (1,2). V-ATPases are comprised of two structural domains: the V1 domain, which consists of a complex of peripheral subunits containing the nucleotide-binding sites attached to the cytoplasmic face of the membrane, and the Vo domain, which is comprised of several integral membrane and tightly associated peripheral proteins that contain the proton pore (1,2). The yeast V-ATPase has at least eight V1 subunits (designated A, B, C, D, E, F, G, and H) and five Vo subunits (designated a, c, c′, c″, and d) (1,3,4). Genetic and biochemical approaches have converged to show that all of the subunits are required for function of V1Vo complexes (3,4). Other eukaryotic V-ATPases have very similar subunit compositions.
Functional interdependence of V1 and Vo has been clearly established. Only fully assembled V1Vo complexes can couple ATP hydrolysis to H⁺ translocation, and in vitro experiments indicate that upon V1 dissociation, the Vo domain does not conduct protons and the V1 domain does not perform MgATP hydrolysis (5-8). Nevertheless, many different cells have been shown to contain free V1 and free Vo sectors in addition to fully assembled V1Vo complexes (9-12). Independent experiments in yeast and Manduca sexta have indicated that the disassembled V1 and Vo sectors exist in a dynamic equilibrium with fully assembled complexes and that this equilibrium can be shifted in response to changes in extracellular conditions (10,13,14). Starvation appears to stimulate disassembly of V1 from Vo, but this disassembly is fully reversible upon refeeding (13-15). This reversible association between V1 and Vo is believed to regulate V-ATPase function in vivo: disassembly of V-ATPase complexes may conserve ATP when energy reserves are low, and reassembly of the enzyme may provide the renewed proton pumping capacity necessary to prevent cytosolic acidification when active metabolism resumes (16,17).
A constitutively active free V1 in the cytosol could quickly become lethal to the cell by hydrolyzing cytosolic reserves of ATP. Graf et al. (14) have isolated cytosolic V1 complexes from M. sexta and shown that these complexes exhibit Ca²⁺-dependent ATP hydrolysis at nonphysiological Ca²⁺ concentrations but hydrolyze MgATP only in the presence of methanol. The properties of V1 complexes have also been examined by reconstitution of expressed subunits and biochemically isolated subcomplexes of the bovine clathrin-coated vesicle ATPase (18-21). These studies have also revealed a shift from Mg²⁺-dependent to Ca²⁺-dependent ATPase activity in V1 complexes detached from the membrane subunits and have suggested that CaATPase activity is a partial reaction characteristic of dissociated V1 sectors that is functionally related to the MgATPase activity of the fully assembled proton pump (18).
In an attempt to gain more insight into the cellular mechanisms of V1-ATPase silencing, we have purified and characterized native yeast cytosolic V1 complexes. Cytosolic V1 complexes were isolated from wild-type cells and from two vma mutant strains. We found that V1 subunit C was not present in any of the isolated complexes. All of the isolated V1 complexes hydrolyzed ATP in the presence of Ca²⁺, but only V1 complexes lacking subunit H had MgATPase activity. The current study also indicates that product inhibition of ATPase activity may occur in cytosolic V1 complexes but cannot fully account for the inactivation of these complexes. In addition, structural changes within the V1 complex itself, such as loss of an activator subunit (the C subunit) and the presence of at least one inhibitory subunit (the H subunit), may be critical for silencing the MgATPase activity in vivo.
EXPERIMENTAL PROCEDURES
Materials and Strains-Zymolyase 100T was purchased from ICN. Concanamycin A was obtained from Wako Biochemicals. Prestained molecular mass markers (high range) were obtained from Life Technologies. ATP bioluminescence assay kit HS II and anti-Myc monoclonal antibody 9E10 were purchased from Roche Molecular Biochemicals. All other reagents were purchased from Sigma.
The wild-type yeast strain used in these experiments was SF838-1D (MATα ade6 leu2-3,112 ura3-52 pep4-3 gal2 (22)). The vma3Δ strain was congenic with the wild-type strain except for the vma3Δ::URA3 mutation (23). A congenic vma13Δ strain was constructed by excising a BamHI-SacII fragment containing the vma13Δ::LEU2 allele from the deletion plasmid described below, and integrating it into the VMA13 locus by a one-step gene disruption (24). Replacement of the wild-type allele by the deletion allele was confirmed by PCR of chromosomal DNA prepared from yeast cells.
VMA13 Plasmid Constructions-VMA13 in pRS315 was a gift from R. Hirata. VMA13 was tagged with the Myc epitope immediately following the methionine start codon by employing PCR and subcloning techniques. Two separate PCR reactions were performed using VMA13 as a template. Reaction A utilized primer 1: 5′-AGAAATAAGCTTTGTTCCATTGTTCCTGAAATCGC, and primer 2: 5′-GACGAAGGAATTTGAAAGAG; this reaction generated a 564-base product that encompassed 546 5′-untranslated region bases and 18 5′ bases of the Myc epitope including the HindIII site that is found within the Myc sequence. Reaction B utilized primer 3: 5′-CAAAGCTTATTTCTGAAAGACTTGGGAGCACGAAGATATT and primer 4: 5′-GATCACGCATAACC, generating a 1912-base product which contained 27 bases of the Myc epitope including the HindIII site, 1433 bases of the VMA13 open reading frame, and 452 bases of 3′-untranslated region. PCR products were ligated into pCR2.1 (Invitrogen). Orientation of the products was determined based on restriction digests. The Reaction A product was subcloned into pRS316 using the BamHI and HindIII sites; the Reaction B product was subcloned into the newly constructed pRS316 vector containing the Reaction A product using the KpnI and HindIII sites. Sequencing confirmed the presence of the Myc epitope within wild-type VMA13. N-Myc VMA13 is able to complement the growth phenotypes of a vma13Δ strain, and vacuolar vesicles isolated from this strain possess 80% of wild-type ATPase activity. VMA13 was cloned into the yeast shuttle vector pRS316 at the BamHI and NotI sites. A deletion plasmid was constructed by replacing the 1084-base pair BglII fragment within the ORF of VMA13 with a 2.2-kilobase LEU2 fragment.
Purification of Cytosolic V1 Complexes-Cells were grown overnight to mid-log phase (3 A600/ml) in YEPD (1% yeast extract, 2% peptone, 2% glucose) medium adjusted to pH 5. 6000 A600 units of cells (approximately 6 × 10¹⁰ cells) were harvested by centrifugation at 2500 × g for 10 min and resuspended in 300 ml of 0.05 M Tris-HCl, pH 9.4, containing 10 mM dithiothreitol. Cells were rocked for 5 min at 30 °C, pelleted by centrifugation for 5 min at 2200 × g, and the pellet resuspended in 300 ml of 0.05 M Tris-HCl, pH 7.5, 1.2 M sorbitol, 2% glucose. Cells were converted to spheroplasts by adding 1500 units of zymolyase 100T to the suspension and gently shaking at 30 °C for 20 min. Spheroplasts were washed twice with 300 ml of YEPD medium containing 1.2 M sorbitol. In certain experiments, spheroplasts were briefly deprived of glucose by incubating for 5 min at 30 °C in 200 ml of YEP (1% yeast extract, 2% peptone) plus 1.2 M sorbitol. Otherwise, incubation was performed in 200 ml of YEPD plus 1.2 M sorbitol. Finally, spheroplasts were collected by centrifugation and lysed on ice in 15 ml of buffer A (0.05 M Tris-HCl, pH 7.5, 30 mM NaCl, 30 mM KCl, 0.3 mM EDTA) containing a protease inhibitor mixture (1 mM phenylmethylsulfonyl fluoride, 1 µg/ml pepstatin, 5 µg/ml aprotinin A, 1 µg/ml leupeptin) in a Dounce homogenizer. The homogenate was centrifuged at 275,000 × g for 1.25 h in a Ti-75 rotor and the supernatant (15-20 mg of protein/ml) precipitated with 50% ammonium sulfate. Precipitation was performed by dropwise addition of a cold saturated solution of ammonium sulfate, pH 7, in three steps of 0-20, 20-35, and 35-50% (v/v) with 15 min incubation between additions and constant stirring on ice. After the final addition, the mixture was incubated on ice for 30 min and the protein pelleted at 9,000 × g for 11 min. The precipitated protein was resuspended in buffer A and desalted on a Centricon Plus-20 filter (100,000 dalton cutoff; Amicon), then filtered and applied to a Mono-Q2 column (Bio-Rad) equilibrated in buffer A containing 9.6 mM β-mercaptoethanol. The column was washed with 10 ml of equilibration buffer and the bound protein eluted with three sequential linear gradients: 5 ml of 0-30% buffer B (0.05 M Tris-HCl, pH 7.5, 0.2 M NaCl, 0.2 M KCl, 0.3 mM EDTA, 9.6 mM β-mercaptoethanol) followed by a 20-ml isocratic flow of 70% buffer A, 30% buffer B; 6 ml of 30-40% buffer B followed by a 20-ml isocratic flow in 60% buffer A, 40% buffer B; and 5 ml of 40-100% buffer B followed by a 5-ml isocratic flow in 100% buffer B. 1-ml fractions were collected. Fractions were analyzed for the presence of V1 subunits by Western blotting, and fractions containing V1 subunits were immunoprecipitated under nondenaturing conditions with monoclonal antibodies 8B1 or 13D11 (against the 69- and 60-kDa V1 subunits, respectively) to identify those containing V1 complexes (11). Fractions containing V1 complexes (fractions 43-54) eluted at 40% buffer B (0.1 M NaCl, 0.1 M KCl) and were pooled and concentrated on a Centricon Plus-20 filter (100,000 dalton cutoff). Pooled Q-2 fractions were applied to a Bio-Rad Sec 400 gel filtration column equilibrated with 30% buffer A, 70% buffer B. Purified V1 complexes (0.05-0.5 mg) were collected in a single fraction. Chromatography was performed on a Bio-Rad BioLogic system.
Enzyme Assays-Hydrolysis of ATP was quantitated colorimetrically as the phosphate released, based on the Taussky and Schorr method (25). Briefly, the reaction was started by addition of 1.5–15 μg of purified V1 to 500 μl of ATPase assay medium (0.05 M Tris-HCl, pH 6.8, containing 4 mM ATP or GTP and 1.6 mM CaCl2 or MgCl2). The final metal-nucleotide complex concentration in the medium was calculated for each condition by the Bound and Determined computer program (26). Incubations were performed at 37 °C for the indicated times (0.5–30 min). Reactions were stopped by addition of an equal volume of 10% (w/v) SDS. Phosphate released was determined by measuring the absorbance at 700 nm immediately after addition of 0.5 ml of Taussky and Schorr reagent (10% FeSO4, 1.2 N sulfuric acid, 1.2% ammonium molybdate). A blank containing only assay medium was measured for each reaction. A standard calibration curve for Pi was used to calculate the micromoles of Pi formed. Data were analyzed using the Sigma Plot curve-fitting application program.
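The conversion from raw A700 readings to the specific activities quoted under "Results" is simple arithmetic, sketched below in Python. The calibration slope and absorbance values are invented for illustration (chosen so the output lands near the 1.7 μmol Pi/min/mg figure reported later for the vma3Δ complexes); only the 11.2 μg protein amount is taken from the paper.

```python
def pi_from_a700(a700, blank_a700, slope_a700_per_umol):
    """Convert a background-corrected A700 reading to umol of Pi using the
    slope of a Pi standard curve (A700 units per umol Pi in the assay)."""
    return (a700 - blank_a700) / slope_a700_per_umol

def specific_activity(a700, blank_a700, slope, minutes, mg_protein):
    """Specific ATPase activity, umol Pi released per min per mg protein."""
    return pi_from_a700(a700, blank_a700, slope) / (minutes * mg_protein)

# Illustrative values: 11.2 ug (0.0112 mg) V1, 1-min reaction.
print(round(specific_activity(a700=0.19, blank_a700=0.02, slope=9.0,
                              minutes=1.0, mg_protein=0.0112), 2))  # ~1.69
```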
The amount of ATP and ADP bound to the purified V1 preparation was determined as follows. V1 (20–60 μg) was precipitated by addition of perchloric acid to a final concentration of 0.44 M, and after 15 min on ice, the mixture was neutralized by addition of an equal volume of ice-cold fresh 0.8 M potassium bicarbonate, then incubated for an additional 20 min on ice. Soluble nucleotides were recovered in the supernatant after centrifugation. ATP and ADP concentrations were measured using the luciferin-luciferase assay in an AutoLumat LB953 luminometer. ADP was converted to ATP by addition of 8.3 mM phosphoenolpyruvate and 48 μg of pyruvate kinase in buffer containing 50 mM HEPES-KOH, pH 7.5, 5 mM MgCl2, and 20 mM KCl. Protein concentrations were determined by the Lowry assay (27).
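The stoichiometry values reported under "Results" (mol nucleotide per mol V1) follow from the protein amount and the measured nucleotide, as in the minimal sketch below. The 445-kDa complex mass is the gel filtration estimate from this paper; the pmol ADP value is an invented illustration.

```python
V1_MASS_KDA = 445.0  # apparent molecular mass of the V1 complex (kDa)

def mol_per_mol_v1(nucleotide_pmol, protein_ug):
    """Stoichiometry of bound nucleotide per V1 complex."""
    v1_pmol = protein_ug / (V1_MASS_KDA / 1000.0)  # ug / (ug per pmol) -> pmol
    return nucleotide_pmol / v1_pmol

# 40 ug V1 with ~1.8 pmol recovered ADP gives ~0.02 mol ADP/mol V1.
print(round(mol_per_mol_v1(nucleotide_pmol=1.8, protein_ug=40.0), 3))
```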
RESULTS
Purification of Cytosolic V1 Complexes-In order to better understand how cytosolic V1 sectors are inactivated and possibly to gain insights into how V1 dissociation is triggered, we isolated and characterized cytosolic V1 sectors from yeast. Cytosolic V1 complexes from wild-type yeast cells and vma3Δ mutant cells, treated both with and without a brief glucose deprivation, were purified by a variation of the methods reported by Graf et al. (14). In wild-type cells, the population of V1 complexes in the cytosol was increased by depriving the cells of glucose for 5 min. This treatment has been shown to trigger dissociation of approximately 75% of assembled V1Vo complexes (13,15). vma3Δ mutant cells lack the gene encoding the proteolipid subunit c of the Vo sector (28) and provided an alternative source of V1. vma3Δ mutants do not form stable Vo sectors, but assemble stable V1 complexes constitutively present in the cytosol (11,23). Wild-type and vma3Δ cells were converted to spheroplasts and osmotically lysed. The purification procedure consisted of four fractionation steps and yielded ~0.05–0.1 and ~0.4–0.5 mg of V1 from 2 liters of log-phase culture of wild-type and vma3Δ cells, respectively. Glucose deprivation improved the yield of V1 sectors from wild-type cells, but did not significantly affect the yield of V1 sectors from the vma3Δ cells. Briefly, the purification consisted of isolation of a soluble fraction by high speed centrifugation, followed by protein precipitation with 50% ammonium sulfate and two sequential chromatographic columns: ion exchange on a Mono-Q column and gel filtration on a Biosilect Sec-400 column. Fig. 1A shows the protein elution profile from the ion exchange column, and Fig. 1B is a Western blot analysis of the elution pattern of the A, B, and C V1 subunits. The C subunit failed to bind to the column even at the lowest salt concentration and fractionated away from the rest of the cytosolic V1 subunits. Because the A and B subunits were detected throughout the gradient, we assessed whether they were assembled with other subunits by nondenaturing immunoprecipitation of selected pooled fractions (not shown). Assembled V1 complexes were present only in fractions eluted with 0.2 M salts (0.1 M NaCl, 0.1 M KCl); A and B subunits that eluted elsewhere from the column were either partially or fully dissociated from the remaining V1 subunits. The assembled V1 complexes eluted from the Mono-Q column (Fig. 1B, fractions 43–54) were concentrated and subjected to gel filtration chromatography. The elution profile for the gel filtration column is shown in Fig. 1C. The V1 complexes eluted in a single peak (Fig. 1D) with an estimated molecular mass of 445 kDa.
Purified V1 complexes from wild-type and vma3Δ cells, either with or without glucose deprivation, showed a similar subunit composition. The presence of the 27-kDa E subunit in the V1 complexes was confirmed by Western blotting (Fig. 1D). Silver staining of the peak eluted from the gel filtration column (Fig. 2) showed additional bands of 32, 16, and 14 kDa that correspond in molecular mass to the previously identified D, G, and F subunits, respectively (4). These data indicate that the V1 complexes obtained from both strains contained the A, B, D, E, F, and G V1 subunits (Figs. 1D and 2). Both the E and G subunits had a somewhat smeared appearance in the V1 preparations. In addition to the previously characterized V1 subunits, bands of approximately 25 and 80 kDa and several high molecular mass bands were consistently present in the fractions containing the V1 complexes. We have not yet determined whether these proteins are associated with cytosolic V1 complexes.
We were particularly interested in determining whether the H subunit, encoded by the VMA13 gene in yeast (29), was associated with the cytosolic V1 complexes. This protein has a molecular mass of 54 kDa and is often masked by the 60-kDa subunit, so the VMA13 gene was tagged with a Myc epitope to allow it to be clearly identified. The tagged protein was expressed in a vma13Δ yeast strain and shown to fully complement the growth defects of the strain. Co-purification of subunit H with cytosolic V1 complexes was confirmed using anti-Myc antibodies against V1 complexes purified from vma13Δ cells expressing the Myc-tagged VMA13 gene (Fig. 3). Therefore, the cytosolic V1 sectors appear to contain all the previously characterized V1 subunits except subunit C.
Enzymatic Activities of Cytosolic V1 Complexes-The isolated V1 domain of the M. sexta V-ATPase is not active as a MgATPase except in the presence of organic solvents (14). Similarly, the isolated yeast V1 complexes did not hydrolyze ATP if the divalent cation supplied was Mg2+. In an attempt to activate the yeast V1 ATPase activity, the purified V1 was treated with 25% methanol, 30 mM octylglucoside, 5–50 mM sodium sulfite, 0.5% N,N-dimethyldodecylamine-N-oxide, and 5–10 mM dithiothreitol, treatments which had effectively activated the MgATPase activity of the Manduca V1 (14) or F1-ATPases from various sources (30–33). None of these treatments elicited any MgATPase activity in the yeast V1 complexes.

FIG. 1. Purification of cytosolic V1 sectors. A and B, ion exchange chromatography of yeast cytosol. A, supernatant protein obtained after high speed centrifugation of a yeast cell lysate was precipitated with 50% ammonium sulfate, desalted, and applied to a Mono-Q2 ion exchange column as described under "Experimental Procedures." After an initial wash, proteins were eluted from the column with three sequential linear gradients. Protein concentration was monitored by measuring absorbance at 280 nm (A280). A profile of the stepwise salt gradient used is superimposed on the protein elution profile and is described in more detail under "Experimental Procedures." B, the indicated fractions collected from the chromatogram shown in A were precipitated with 10% trichloroacetic acid, solubilized, separated by SDS-PAGE, and blotted to nitrocellulose. The blot was probed with mouse monoclonal antibodies 7A2, which recognizes the C subunit, 13D11, which recognizes the B subunit, and 8B1, which recognizes the A subunit, followed by alkaline-phosphatase-conjugated goat anti-mouse antibodies (11). The fractions containing assembled V1 complexes were identified by nondenaturing immunoprecipitation with monoclonal antibody 8B1 as described under "Experimental Procedures." C and D, isolation of cytosolic V1 complexes by gel filtration. C, fractions 43–54 from the ion exchange chromatography column shown in A were pooled, concentrated, and loaded on a Bio-Rad Sec400 gel filtration column. Protein concentration in fractions eluted from the column was monitored by measuring the A280. D, V1 subunits eluted from the gel filtration column in a single peak, centered at fraction 20. The indicated fractions were subjected to SDS-PAGE and immunoblotting. The A and B subunits were recognized with monoclonal antibodies 8B1 and 13D11 as described above, and the E subunit was recognized by polyclonal antiserum raised against the yeast E subunit (generously provided by Dr. Tom Stevens).
Cytosolic V1 complexes from both wild-type and vma3Δ cells did hydrolyze ATP in a Ca2+-dependent manner at nonphysiological (mM) Ca2+ concentrations, however. CaATPase activity has been described in purified V1 complexes from M. sexta (14), isolated chloroplast and Bacillus firmus F1 complexes (30,31), and reconstituted mixtures of bovine V1 subunits (18–21). The enzymatic properties of complexes purified from glucose-deprived wild-type cells or vma3Δ cells with or without glucose deprivation were very similar. Because vma3Δ cells provided a more abundant source of cytosolic V1 than wild-type cells, the kinetic analysis described below was performed on cytosolic V1 complexes isolated from vma3Δ mutant cells.
CaATP hydrolysis was first examined as a function of time at a constant CaATP concentration (1.4 mM). When the incubation time was varied from 0.5 to 20 min, the plot of micromoles of Pi formed versus time had a hyperbolic shape, showing a rapid initial rate that decayed until there was little further ATP hydrolysis after 3 min (Fig. 4A). The initial activity, detected at 1 min, was 1.7 μmol of Pi/min/mg. At a lower CaATP concentration (0.3 mM), it took longer for the activity to decay, but the activity was gone by 20 min. An apparent Km of 0.183 mM for CaATP, which is similar to the Km of yeast V1Vo (0.210 mM; Ref. 34), was estimated from 1-min reactions performed over a larger range of concentrations. Based on this information, 1.4 mM CaATP should nearly saturate the enzyme, and the loss of activity over time seen in Fig. 4A cannot be attributed to substrate depletion.
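An apparent Km like the one cited here would typically be obtained by fitting the 1-min rates to the Michaelis-Menten equation; the sketch below shows one way to do this with scipy. The rate data are invented solely so that the fit returns values close to those in the text; they are not measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

s = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.4])        # CaATP, mM
v = np.array([0.37, 0.61, 0.90, 1.17, 1.39, 1.50])   # umol Pi/min/mg (illustrative)

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=(1.7, 0.2))
print(f"Vmax ~ {vmax:.2f} umol/min/mg, apparent Km ~ {km:.3f} mM")
```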
The substrate specificity of the yeast V1 complexes was examined. Ca2+-dependent hydrolysis of GTP was observed (Fig. 4B). Interestingly, CaGTP hydrolysis was linear for at least 20 min under conditions where ATP hydrolysis had ceased after 3 min. The V1 complexes exhibited a specific activity for GTP hydrolysis of 0.47 μmol/min/mg of protein. Once again the hydrolysis was Ca2+-dependent; Mg2+ did not support any GTPase activity in the cytosolic V1 complexes.
Loss of activity over time could be an indication of product inhibition of the ATPase activity. Product inhibition has been studied in considerable detail in F1-ATPases and appears to be specific to ADP in many cases (35–37). Thus, although GTP is a substrate for F1, GDP is much less efficient in product inhibition (36). To further explore the possibility that the ATPase activity of the cytosolic V1 complexes was inhibited by ADP, V1 complexes were preincubated in the presence of CaADP before measurement of the CaATPase activity. Isolated V1 complexes were preincubated with 1.1 mM CaADP for 1 min, then the V1-CaADP mixture was diluted 30-fold into assay medium and the CaATPase activity measured in the presence of 1.4 mM CaATP (Fig. 4A). The CaATPase activity of the V1 complexes was not fully inhibited by CaADP preincubation. The initial ATPase activity was 51% of that in the absence of ADP, and a decay in ATPase activity similar to that seen in the absence of ADP preincubation was observed over the next 3 min. We also determined whether the cytosolic V1 preparation contained tightly bound nucleotides after isolation that might be involved in inhibition of either Mg2+-dependent or Ca2+-dependent ATPase activity. Only substoichiometric amounts of ADP (0.02 mol/mol V1) and ATP (0.005 mol/mol V1) were detected in the isolated cytosolic V1 complexes. Pyrophosphate was shown to enhance the MgATPase activity of F1-ATPases by removing tightly bound nucleotides (36). However, addition of 4.8–9.4 mM PPi did not activate the ATPase activity of cytosolic yeast V1.
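A quick arithmetic check, using only the concentrations given above, shows how much CaADP is carried into the assay after the 30-fold dilution, which helps in judging how much of the observed inhibition could be due to residual ADP competing with substrate.

```python
adp_preincubation_mM = 1.1   # CaADP during the 1-min preincubation
dilution_factor = 30         # dilution into the ATPase assay
atp_assay_mM = 1.4           # CaATP in the assay

residual_adp_mM = adp_preincubation_mM / dilution_factor
print(f"residual CaADP in the assay: {residual_adp_mM * 1000:.0f} uM "
      f"({100 * residual_adp_mM / atp_assay_mM:.1f}% of the CaATP present)")
# -> ~37 uM, i.e. ~2.6% of the CaATP concentration
```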
The sensitivity of the yeast cytosolic V1 to a variety of inhibitors was examined. The CaATPase activity of the cytosolic V1 was not affected by addition of the specific P-type and F-type ATPase inhibitors sodium orthovanadate (1 mM) and sodium azide (10 mM), respectively. Concanamycin A, a specific V-type ATPase inhibitor believed to interact with the Vo domain at the membrane (38), had no effect on the ATPase activity of purified V1. V-ATPases contain a set of three conserved cysteine residues that are essential for activity and render the enzyme sensitive to low concentrations of N-ethylmaleimide (39). Measuring N-ethylmaleimide sensitivity of the cytosolic V1 sectors was difficult because the presence of reducing agent (β-mercaptoethanol) appeared to be essential for purification of an active V1 and maintenance of its activity. However, addition of N-ethylmaleimide in excess of the concentration of β-mercaptoethanol in the isolated V1 preparation allowed us to estimate an IC50 of 0.5 mM for N-ethylmaleimide.

FIG. 3. Cytosolic V1 complexes contain the H subunit. V1 complexes were isolated from vma13Δ mutant cells bearing a Myc epitope-tagged VMA13 gene on a low copy plasmid as described in the legend to Fig. 1 and under "Experimental Procedures." The indicated fractions from the gel filtration column were separated by SDS-PAGE and subjected to immunoblotting. The blot was probed with monoclonal antibodies 8B1 against the A subunit and 9E10 against the Myc epitope attached to the H subunit.

FIG. 4. CaATPase and CaGTPase activity of cytosolic V1 complexes. A, cytosolic V1 complexes were isolated from vma3Δ cells, and 11.2 μg of the isolated complexes were incubated with 1.4 mM CaATP (open circles) in a 500-μl total volume for the indicated times. Pi release was monitored by colorimetric assay (25). CaATPase activity was also measured after a 1-min preincubation of the complexes with 1.1 mM CaADP followed by a 30-fold dilution into assay buffer (500 μl final volume) containing 1.4 mM CaATP (closed circles). B, cytosolic V1 complexes (11.2 μg) were incubated with 1.4 mM CaGTP (closed circles) in a 500-μl total volume for the indicated times, and phosphate release was monitored as described above. The CaATPase activity from A (open circles) is shown for comparison.
Purification and Characterization of Cytosolic V1 Complexes Lacking the H Subunit-To address the function of the H subunit in cytosolic V1 complexes, we isolated the V1 complex from a vma13Δ mutant strain. vma13Δ cells assemble unstable and inactive V1Vo complexes at the vacuolar membrane (29). vma13Δ cells were briefly deprived of glucose (5 min in YEP) and the cytosolic V1 complexes purified as described previously. As shown in Fig. 5, V1 complexes isolated from vma13Δ cells showed the same subunit composition as V1 from wild-type and vma3Δ cells, with the exception of the loss of the H subunit, which runs just below the 60-kDa B subunit.
The overall CaATPase activity of V1 complexes missing the H subunit was higher than that of V1 complexes from wild-type and vma3Δ cells, primarily because the kinetics of ATP hydrolysis were linear for almost 20 min. The initial specific activity was 1.8–4.0 μmol of Pi/min/mg in two different preparations (Fig. 6), only slightly higher than that seen in the V1 complexes isolated from vma3Δ cells. Linearity of the reaction over as much as 20 min suggested that little or no product inhibition occurred during hydrolysis by the cytosolic V1 complexes lacking subunit H. To better understand the lack of decay in the catalytic activity, we repeated the ADP preincubation experiment shown in Fig. 4A with the complexes from vma13Δ cells. The cytosolic V1 complex from vma13Δ cells was preincubated with 1.1 mM CaADP and its ATPase activity measured as described above (Fig. 6B). After preincubation with ADP, the enzyme's initial activity was 66% of that without preincubation. V1 complexes isolated from vma3Δ cells retained 51% of the initial activity, so complexes from vma13Δ cells were only slightly less sensitive to ADP inhibition. The kinetics of hydrolysis remained nearly linear over a 30-min period, however. These results indicate that in the absence of subunit H, the cytosolic V1 complexes can still be inhibited by preincubation with ADP, but they do not experience the decay of activity over time that may be an additional effect of product ADP in the complexes containing subunit H.
The cytosolic V1 complexes lacking subunit H also exhibited some MgATPase activity, even in the absence of activating agents. Kinetics of MgATP hydrolysis revealed an initial specific activity of 1.2 μmol of Pi/min/mg measured at 1 min (Fig. 7). Activity drastically decreased after the first 5 min. An apparent specific activity of 0.265 μmol of Pi/min/mg was measured after 15 min in assay medium containing an initial concentration of 1.4 mM MgATP. Potential activators were added during a 15-min incubation, and their effects on the MgATPase activity are shown in Table I. Methanol (25% v/v) gave an almost 2-fold increase in the apparent specific activity, but octylglucoside and sodium sulfite proved to be somewhat inhibitory.
DISCUSSION
Subunit Composition and Enzymatic Activities of Cytosolic V1 Complexes-
The yeast cytosolic V1 complexes contain the established V1 subunits A, B, D, E, F, G, and H. Subunit C was the only V1 subunit not associated with yeast cytosolic V1 complexes; this subunit was present in a high speed supernatant, but fractionated away from the other V1 subunits in ion exchange chromatography. Earlier immunoprecipitation experiments had indicated that the C subunit dissociated from both the V1 and Vo sectors during V-ATPase disassembly (13). The subunit composition of the yeast cytosolic V1 complexes closely resembles that of the cytosolic V1-ATPase complexes from M. sexta (14,40). The M. sexta complexes appear to contain subunits A, B, D, E, F, and G, along with substoichiometric amounts of the C subunit (40). Subunit H has only recently been identified in M. sexta (17), so it is unclear whether this subunit is really not present in the insect cytosolic V1 complexes, or is hidden by the B subunit, which generally runs very close to the H subunit on SDS-PAGE.
Both the insect and yeast cytosolic V1 complexes are active as CaATPases, indicating that this activity does not require the presence of the C subunit. There were other striking similarities in the activities of the two enzyme preparations. Both showed a loss of CaATPase activity over time and a lower level of CaGTPase activity that did not decay over time. As noted by Graf et al. (14), these features are also shared by isolated F1 sectors from chloroplasts and B. firmus (30,31). One difference between the insect and yeast V1 preparations is the activation of MgATPase activity in the insect enzyme in the presence of 25% methanol; no MgATPase activity was observed for the wild-type yeast V1 preparation, even in the presence of a wide variety of potential activating agents. A shift from Mg2+-dependent to Ca2+-dependent ATP hydrolysis has been observed in V1 subunit reconstitution experiments of the bovine clathrin-coated vesicle V-ATPase as well. These subunit reconstitution experiments had indicated an essential role for the C subunit in CaATP hydrolysis (19), however, and this appears to conflict with results from the native cytosolic V1 preparations.
We had anticipated that cytosolic V1 sectors isolated from wild-type yeast cells and vma3Δ mutant cells before and after glucose deprivation might show differences in subunit composition that reflected their different histories and provided indications as to how glucose deprivation signals V1 dissociation. This did not prove to be the case, at least at the level of analysis reported here. Both the subunit composition and the basic enzymatic properties of the cytosolic V1 sectors from different sources were very similar. It may still be that there are subtle differences in post-translational modifications of the subunits that we have not yet identified, and we plan to look at the different preparations in more detail in the future. It is also possible, however, that cytosolic V1 sectors from wild-type cells before and after glucose deprivation have the same structure, and that different amounts of V1 are present because the equilibrium of an ongoing dissociation and reassociation is shifted when glucose becomes limiting. Along the same lines, the cytosolic V1 sectors that are formed in vma3Δ cells but never attach to the membrane may be stable because they resemble the cytosolic wild-type V1 sectors that are normally cycling on and off the membrane.
How Is Mg2+-dependent ATP Hydrolysis by Cytosolic V1 Sectors Silenced?-Reversible disassembly of V-ATPases has been proposed to be a mechanism of down-regulating V-ATPase activity when growth conditions are unfavorable (16,17). Underlying this proposal is the assumption, consistent with in vitro data (6,8), that the cytosolic V1 sectors are inactive in ATP hydrolysis. As expected, native cytosolic V1 complexes purified from wild-type yeast cells could not hydrolyze ATP when Mg2+ was provided as the divalent cation, indicating that under physiological conditions, the yeast V1 complexes are catalytically inactive. The results reported here suggest several potential reasons cytosolic V1 sectors are not active in vivo.

FIG. 5. Subunit composition of cytosolic V1 complexes isolated from vma13Δ cells. Cytosolic V1 complexes were isolated from wild-type and vma13Δ mutant cells as described in the legend to Fig. 1 and under "Experimental Procedures." The V1 peak fraction after gel filtration chromatography was subjected to SDS-PAGE and stained with Coomassie Blue. The positions of known V1 subunits are indicated; the identities of the A, B, and E subunits were confirmed by immunoblotting.
Characterization of V1 ATPase activity from a vma13Δ mutant suggests that the H subunit may play an important role in inhibiting both Mg2+- and Ca2+-dependent ATP hydrolysis by cytosolic V1 sectors. An inhibitory role for the H subunit was not expected from previous data. The yeast vma13Δ mutant, which lacks the H subunit, assembles V1Vo complexes in the membrane (29), but the complexes are unstable and inactive. Addition of the sub-57-kDa dimer, which consists of two isoforms of the H subunit, to a V1 complex reconstituted from bovine clathrin-coated vesicle subunits enhances the CaATPase activity of the complexes, and the sub-57-kDa dimer or either of the individual H subunit isoforms appears to be essential for MgATPase activity and proton pumping by the fully assembled bovine clathrin-coated vesicle pump (21,41). Taken together, these data have suggested that the H subunit may act as an activator, not an inhibitor, of V-ATPase activity, but these experiments have focused predominantly on the intact V1Vo complex, not isolated V1 sectors. The experiments presented here suggest that the H subunit may play a role more similar to that of the ε subunit of the E. coli F-ATPase, which inhibits the F1-ATPase when it is detached from the membrane (33), but may be critical for proper structural and functional coupling of F1 and Fo (42,43).
Comparison of CaATP hydrolysis by cytosolic V1 sectors with and without the H subunit (Fig. 6A) indicates that the H subunit may be particularly critical for the decay in ATP hydrolysis rate after the first few minutes of turnover. Cytosolic V1 complexes from vma13Δ cells showed only a slightly higher initial rate than those from vma3Δ cells, but they were able to maintain this rate for at least 20 min, under conditions where complexes from vma3Δ cells were almost completely inactive after less than 5 min. The higher activity of the complexes from vma13Δ cells could not be attributed to loss of another V1 subunit, because all of the subunits except subunit H appeared to be present in these complexes. It is even more intriguing that cytosolic V1 complexes lacking the H subunit appear to exhibit some Mg2+-dependent ATP hydrolysis that could be further activated in the presence of methanol. As described above, cytosolic V1 sectors from M. sexta, which may or may not contain an equivalent of the H subunit, exhibited methanol-activated MgATPase activity. However, it is notable that the M. sexta enzyme appeared to lose Ca2+-dependent activity under conditions where it gained Mg2+-dependent activity, whereas both Ca2+- and Mg2+-dependent activities appear to be activated in the cytosolic V1 complexes from vma13Δ cells. This result suggests that the methanol does not act on the M. sexta enzyme simply through release of the H subunit or a functional equivalent. The MgATPase activity of yeast vma13Δ complexes showed a loss of activity with time similar to that seen for the CaATPase activity of cytosolic V1 sectors from wild-type cells, indicating that cytosolic V1 sectors are still prevented from exhibiting high levels of unproductive ATP hydrolysis in vma13Δ cells in vivo. These data suggest that the H subunit is important in inactivating cytosolic V1-ATPase activity, but there are probably other silencing mechanisms that act in combination, as described below.

FIG. 6. CaATPase activity of cytosolic V1 complexes from vma13Δ cells. A, cytosolic V1 complexes were isolated from vma13Δ cells, and 10.3 μg of the isolated complexes were incubated with 1.4 mM CaATP in a 500-μl total volume for the indicated times (closed circles). The activity from 11.2 μg of V1 complexes isolated from vma3Δ cells and assayed under identical conditions is shown for comparison (open circles). Pi release was monitored by colorimetric assay (25). B, CaATPase activity was also measured after a 1-min preincubation of the complexes with 1.1 mM CaADP.

FIG. 7. Cytosolic V1 complexes were isolated from vma13Δ cells, and 1.5 μg of the isolated complexes were incubated with 1.4 mM MgATP in a 500-μl total volume for the indicated times. Pi release was monitored by colorimetric assay (25).
One of these other mechanisms may be inhibition by ADP. The data presented here suggest that ADP could play a rather complex role in inhibiting the activity of cytosolic V1 complexes. The loss of CaATPase activity in the wild-type complexes or MgATPase activity in the vma13Δ complexes over time could have at least two explanations. First, enzyme activity may destabilize the complex so that one or more subunits is lost, inactivating the enzyme. We cannot eliminate this possibility at present, but it should be possible to address it by careful determination of the subunit composition before and after catalysis. Alternatively, the loss of activity is suggestive of product inhibition, which could also be an effective means of minimizing unproductive ATP hydrolysis in vivo. Similar behavior of the M. sexta cytosolic V1 has been attributed to product inhibition (14), and a more detailed analysis of the V1-ATPase of Thermus thermophilus (44) has clearly demonstrated that this enzyme can be inactivated during ATP hydrolysis by entrapping an inhibitory MgADP at the catalytic site. In both of these cases, the similarity to entrapping of MgADP by F1-ATPases has been noted, but there are also some important differences between the behavior of the yeast cytosolic V1 complexes and F1-ATPases (32, 35–37). First, briefly preincubating the yeast cytosolic V1 with 1 mM CaADP gave only a partial inhibition of the initial CaATPase activity and did not appear to accelerate its decay. The extent of inhibition due to CaADP preincubation was similar in cytosolic V1 complexes with or without the H subunit, even though the complexes without the H subunit showed much less inactivation during hydrolysis. Second, a number of activating agents that are believed to act by stimulating release of MgADP entrapped at a catalytic site of F1-ATPases, for example, sulfite (32), do not have any effect on the Ca2+-dependent activity of the yeast cytosolic V1. Perhaps most importantly, entrapment of a tightly bound MgADP or MgATP that persists through our purification protocol cannot account for the lack of MgATPase activity in cytosolic V1 complexes from wild-type cells, because the complexes as isolated are almost completely devoid of ADP and ATP. These data indicate that there may be at least two inhibitory effects of ADP: one type of inhibition depends on the formation of ADP during turnover and does not occur in complexes lacking the H subunit, and the second type can be seen after a brief preincubation with ADP and occurs in complexes with and without the H subunit. The switch from MgATPase activity to CaATPase activity in the cytosolic V1 complexes cannot be easily accounted for by the tighter binding of an inhibitory MgADP, unless this binding is so rapid and so tight that it occurs before significant MgATP hydrolysis can be observed. Complex effects of ADP on the V-ATPase of bovine clathrin-coated vesicles have been reported previously (45). Further experiments will be necessary to characterize the mechanisms of ADP inhibition of the yeast cytosolic V1 complexes and to fully assess their physiological significance.
The data presented here suggest that the inhibitory H subunit and inhibition by product ADP may play important roles in silencing unproductive hydrolysis by cytosolic V1 complexes in yeast, but it is important to emphasize that they do not exclude other mechanisms of silencing. We have demonstrated release of the C subunit from cytosolic V1 sectors, but have not yet determined whether this release plays a functional role. We have not yet assessed whether there are post-translational modifications of any of the V1 subunits when they are released from the membrane, but with the purification protocol developed here, we are poised to determine both whether there are reversible modifications and whether these modifications affect activity. Silencing cytosolic V1 complexes in vivo is likely to be a synergistic effect rather than a simple event. Considering that inhibition of cytosolic V1 complexes is vital, it would not be surprising if cells had more than one mechanism to lock the catalytic conformation of the complex and prevent futile ATP hydrolysis.
"year": 2000,
"sha1": "afb7d4a24cce5210d548bd3aa797f1e7773e408b",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/275/28/21761.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "bb6d2c4e7530e3f33154eb5592a3f23a168b3ae7",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Recent Advances in the Determination of Major and Trace Elements in Plants Using Inductively Coupled Plasma Optical Emission Spectrometry
Interest in measuring major and trace elements in plants has increased in recent years because of growing concerns about the elements' contribution to daily intakes or the health risks posed by ingesting vegetables contaminated by potentially toxic elements. The recent advances in using inductively coupled plasma optical emission spectrometry (ICP-OES) to measure major and trace elements in plant samples are reviewed in the present work. The sample preparation before instrumental determination and the main advantages and limitations of ICP-OES are described. New trends in element extraction into liquid solutions using less toxic solvents and microextractions are observed in the recently published literature. Even though ICP-OES is a well-established and routine technique, recent innovations to increase its performance have been found. Validated methods are needed to ensure that reliable results are obtained. Much research has focused on assessing the principal figures of merit, such as limits of detection and quantification, selectivity, working ranges, precision in terms of repeatability and reproducibility, and accuracy through the analysis of spiked samples or certified reference materials. According to the published literature, the ICP-OES technique, 50 years after the release of the first commercially available equipment, remains a powerful and highly recommended tool for element determination over a wide range of concentrations.
Introduction
The determination of mineral elements is a critical aspect of chemical analysis in most types of samples. Even though about three-quarters of the elements in the periodic table are metals, only several are known to be essential for living organisms due to their biochemical role in the human body. Elements like Ca, K, P, Na, and Mg are major essential elements, whereas elements such as Fe, Zn, Mn, Cu, and Se are trace essential elements [1–4]. The deficiencies of these elements may cause malfunctioning of organisms, while their concentration above certain thresholds may negatively affect the organism's health [5,6]. Other elements (Cd, Pb, Hg, As, and Sr) with no known biological function represent a health risk even at low concentrations [7–9].
Amongst the most widespread methods for determining major and trace elements, spectrometric methods based on inductively coupled plasma (ICP), like inductively coupled plasma optical emission spectrometry (ICP-OES) and inductively coupled plasma mass spectrometry (ICP-MS), are known for their robustness, low detection limits, good accuracies, large linear concentration ranges, and multielement determination capability [10–14]. The robustness of the ICP-OES technique is also demonstrated by the fact that, 50 years after the first commercial instrument appeared [10], it remains one of the most widely used techniques for determining major and trace elements in various types of liquid and solid samples. In ICP-OES, the plasma is used to emit photons with characteristic wavelengths for each analyzed element, thus ensuring the element's identification, while the intensity of the emitted radiation is proportional to the concentration of the analyte. The samples typically need to be introduced into the plasma in their liquid form by nebulization. Therefore, the analysis of vegetables implies their digestion to bring analytes from their solid matrix into a liquid aqueous solution. In the case of vegetable analysis by ICP-OES, the sample preparation usually includes several steps, such as washing/cleaning, drying, crushing, sieving, digestion using a mixture of acids, filtration, and then measurement by ICP-OES [15]. Thus, plant sample preparation before ICP-OES analysis has been extensively studied.
Given the general tendency in analytical chemistry to achieve greener methodologies, determining chemical elements by inductively coupled plasma-based techniques should become as environmentally friendly as possible. Anastas [16] first drew attention to the necessity of adapting analytical methodologies to the requirements of green chemistry. Nowak et al. [17] introduced the concept of white analytical chemistry, while Gałuszka et al. [18] formulated the 12 main principles of green analytical chemistry to protect the environment and analysts while performing analytical procedures. The primary strategy of this concept comprises reducing the stages of analytical procedures, performing on-site analysis using portable instruments, replacing or eliminating toxic reagents in sample preparation or sampling procedures, minimizing the use of energy, performing multi-parameter analysis, and increasing the safety of analysts [18,19]. Thus, one of the aims of this review was also to assess the possible integration of metals analysis in vegetables by ICP-OES into the principles of green analytical chemistry, both from the instrumentation and from the sample preparation points of view.
This review aims to present comprehensive information on the new trends and findings from ICP-OES application in plant analysis. It emphasizes the necessity and importance of ensuring appropriate quality control in plant analysis and characterization. The literature published in the last ten years was mostly considered.
Plant Sample Preparation for ICP-OES Analysis
Figure 1 presents a schematic representation of the analytical steps required for determining major and trace elements in plants by ICP-OES. Sample preparation is a critical issue for obtaining representative and consistent results. Firstly, the samples should be representative of the intended study. Depending on the scope, the edible part should be collected [20–22]. Next, the samples must be cleaned and washed with tap water and then distilled/deionized water to eliminate dust and soil-adhering particles [23,24]. To obtain the dry mass of plant samples, these can be dried using different approaches, like air-drying [23,25] for days to weeks, drying in an oven at constant temperature for hours to days until constant weight [26–29], or freeze-drying [22].

When dried in the oven, temperatures of 50–80 °C are generally used to ensure faster water evaporation while remaining low enough to avoid possible loss of the analytes. The dried samples are powdered with grinders, blenders, and agate/porcelain mortars and pestles [30,31]. The plant sample powder obtained by grinding is often directly digested, while other authors have sieved the powders before digestion [32,33].
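Since element concentrations in plants are usually reported on a dry-mass basis, the bookkeeping behind drying to constant weight is worth making explicit. The following minimal Python sketch, with invented masses, shows the moisture calculation and the fresh-to-dry-basis conversion.

```python
def moisture_fraction(fresh_g, dry_g):
    """Mass fraction of water lost during drying to constant weight."""
    return (fresh_g - dry_g) / fresh_g

def fresh_to_dry_basis(conc_fresh_mg_per_kg, fresh_g, dry_g):
    """Element concentration re-expressed per kg of dry mass."""
    return conc_fresh_mg_per_kg * fresh_g / dry_g

fresh, dry = 100.0, 12.5  # g before and after oven drying (illustrative)
print(f"moisture: {100 * moisture_fraction(fresh, dry):.1f}%")
print(f"2.0 mg/kg fresh = {fresh_to_dry_basis(2.0, fresh, dry):.1f} mg/kg dry")
```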
Different approaches have been developed and presented in the literature to extract analytes from solid plant samples into liquid solutions. These are typically focused on five methodologies: (1) wet acid digestion; (2) combustion, followed by ash acid digestion; (3) extraction to liquids with complexing chemicals; (4) extraction in simulated body fluids for bioaccessibility studies; and (5) extraction using non-toxic solvents or use of microextraction, in line with greener sample preparation methods. These approaches are summarized in Figure 2.
The commonly used methodologies for element extraction from solid plants are based on matrix digestion. This can be carried out directly on powdered plant samples using oxidizing acids to destroy organic matter and minimize spectral interferences. Nitric acid is frequently used due to its oxidizing role and because some elements form soluble nitrates. Also, mixtures of HNO3 with H2O2, HCl, HClO4, HF, or H2SO4 are used for sample mineralization [24,34,35]. Depending on the matrix and the analyte of interest, different optimizations of the composition of the mixtures used for digestion and of the conditions for wet digestion have been carried out. Good digestion efficiency is obtained if the organic components of the samples are removed. In this sense, sample combustion prior to acid extraction can be employed, even though this involves supplementary steps and is a possible source of contamination.
Wet Acid Digestion of Plant Samples for Metals Determination
Table 1 provides examples of wet acid digestion procedures for element extraction from plant samples before their instrumental determination, from selected literature published between 2014 and 2024. Even though acid wet digestion can be performed on a hot plate or in closed microwave systems, microwave-assisted digestion was chosen in most studies. The use of microwave conditions with closed vessels has several advantages, since the time for digestion is shorter, while contamination or loss of analytes is minimized. Moreover, the high pressure and temperature obtained in closed vessels contribute to the degradation of organic matter; thus, the combustion step is not necessary. On the other hand, a lower mass of sample can be digested in closed vessels, typically in the range of 0.1–0.5 g, because the high amount of organic matter increases the pressure in the closed vessels. Conversely, heating on a hotplate in an open vessel has the advantage of digesting higher amounts of sample (reported up to 5–10 g) [39], which represents an advantage in analyzing a more representative sample and in obtaining lower limits of quantification, an essential aspect in the measurement of trace elements by ICP-OES.
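The trade-off between digested sample mass and quantification limits can be made concrete: for a fixed instrumental LOQ and final digest volume, the method LOQ in the solid sample scales inversely with the mass taken. A hedged Python illustration with assumed values follows.

```python
def method_loq_mg_per_kg(instr_loq_ug_per_l, final_volume_ml, sample_mass_g):
    """Method LOQ in the solid sample (mg/kg) from the instrumental LOQ
    in the digest solution (ug/L); ug/L * L / g reduces to mg/kg."""
    return instr_loq_ug_per_l * (final_volume_ml / 1000.0) / sample_mass_g

# Assumed: 2 ug/L instrumental LOQ, 25 mL final digest volume.
for mass in (0.2, 0.5, 5.0):  # closed-vessel vs. open hot-plate sample masses
    print(f"{mass:>4} g sample -> method LOQ "
          f"{method_loq_mg_per_kg(2.0, 25.0, mass):.3f} mg/kg")
```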
Even though, in some cases, only HNO3 or mixtures of mineral acids were used for digestion [7,32,41,52,67], in most of the published papers, H2O2 was used as an oxidizing agent for the wet digestion of organic matter [13,68–73]. Some authors [37] used only H2O2 to digest the samples, but under particular conditions: a single-reaction-chamber microwave system that allows temperatures up to 300 °C and pressures up to 199 bar. However, digestion based only on H2O2 is well in agreement with green analytical chemistry recommendations due to the low acidity of the resulting solutions and residues [73]. Thus, it is highly recommended for future developments.
It is important to note that in the majority of studies, there is no clear definition of metrics for evaluating the greenness of analytical methodologies. In many cases, the developed analytical methods are considered green by the authors without checking this [74]. To ensure appropriate assessment, several tools have been developed to confirm whether a method adheres to green analytical chemistry principles: the National Environmental Methods Index (NEMI) [75], Green Analytical Procedure Index (GAPI) [76], Complementary Green Analytical Procedure Index (ComplexGAPI) [77], Analytical Eco-Scale (AES) [78], Analytical Method Greenness Score (AMGS) [79], Analytical Greenness Metric (AGREE) [80], and the Analytical Greenness Metric for Sample Preparation (AGREEprep) [81]. On the topic of metals and metalloids analysis by ICP-OES following sample digestion, the existing literature on greenness evaluation procedures is scarce. These evaluations are predominantly applied to chromatographic methods, which typically involve the use of higher quantities of chemicals [82]. However, in several papers, the authors used the abovementioned tools to assess the green character of the developed methods. For instance, Pereira Junior et al. [83] developed a sample preparation method for the determination of As, Ca, Cd, Cu, Cr, K, Fe, P, Pb, Mg, Mn, Na, Sr, and Zn in medicinal herbs by digestion in a closed digester block prior to ICP-OES measurement. The optimized parameters for digesting 0.10 g of a medicinal herb sample were as follows: a heating period of 120 min at 180 °C was employed, utilizing a mixture comprising 1.38 mL of 65% HNO3, 1.00 mL of 30% H2O2, and 2.62 mL of deionized water. The AGREE metric yielded a score of 0.63, thereby establishing the method's environmental friendliness [83]. In a study by Ncube et al. [84], a microwave-assisted digestion method was developed for the determination of arsenic, cadmium, chromium, lead, and tin in pet food samples. Hydrogen peroxide was used as a digestion reagent, and subsequent metal determination was conducted using inductively coupled plasma optical emission spectrometry (ICP-OES). The AGREEprep metric was employed by the authors to evaluate the method's greenness, resulting in a score of 0.76, which confirmed its green nature [84].
Combustion and Acid Digestion
Plant samples contain high amounts of organic substances, so their incineration may be very suitable for sample digestion. Practically, in this way, the organic matrix is eliminated in the form of CO2 and H2O, while the residue remaining after burning consists of inorganic substances that can be dissolved by diluted mineral acids. Table 2 shows several selected examples of combustion followed by dissolution of the resulting residue for element measurement in plant samples. In general, the methods based on combustion involve relatively simple equipment. The amount of the analyzed sample can be higher than in direct microwave digestion because the decomposition of organic matter is performed separately, generally in open vessels, and thus does not produce high pressure. The mass of the resulting ash is much lower than that of the initial sample and can be dissolved with diluted mineral acids [55,89–91]. However, this process is longer than direct acid digestion, and the risk of contamination or analyte loss may appear due to the multiple steps involved. Thus, the entire procedure should be carefully conducted.
Dissolving, Complexing, and Green Extraction Methods
Even though the metals were analyzed after acid digestion in most reported studies, several papers reported the extraction of metals with different other types of reagents, or in mixtures of diluted acids. For example, Butorova et al. [25] measured the metal concentrations in ethanol/water extracts.
Deep eutectic solvents (DESs) have recently been reported as environmentally friendly solvents for metal extraction from samples with organic matrices, including plant samples. DESs involve a system formed from a hydrogen bond donor (HBD) and an acceptor (HBA) [92,93]. This system decreases the melting point so that the extraction can be performed even at room temperature. The typical HBA is choline chloride, which is a natural compound. Many substances, such as tartaric, citric, benzoic, oxalic, acetic, malonic, malic, formic, maleic, succinic, adipic, boric, lactic, ascorbic, gallic, and mandelic acids; 1,4-butanediol; glycerol; sorbitol; ethylene glycol; triethylene glycol; benzamide; urea; thiourea; fructose; glucose; sucrose; and maltose have been tested as HBDs [92,94]. Table 3 displays some examples of metal extraction from plant samples by extraction with solvents, including DESs as green solvents.
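For a concrete idea of how such a DES is prepared, the sketch below computes the component masses for a 1:1 molar choline chloride:malic acid mixture, one of the HBA/HBD pairs listed above. The molar masses are standard values; the 0.05 mol scale is arbitrary.

```python
M_CHOLINE_CHLORIDE = 139.62  # g/mol, hydrogen bond acceptor (HBA)
M_MALIC_ACID = 134.09        # g/mol, hydrogen bond donor (HBD)

def des_masses(moles_hba, ratio_hbd_to_hba=1.0):
    """Masses of HBA and HBD to weigh for a given HBA:HBD molar ratio."""
    hba_g = moles_hba * M_CHOLINE_CHLORIDE
    hbd_g = moles_hba * ratio_hbd_to_hba * M_MALIC_ACID
    return hba_g, hbd_g

hba, hbd = des_masses(0.05)  # 0.05 mol of each component for a 1:1 DES
print(f"choline chloride: {hba:.2f} g, malic acid: {hbd:.2f} g")
```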
Analytes | Type of Samples | Digestion Method | References
Al, Ca, As, Co, Cd, Cr, Fe, Cu, K, Mg, Na, Mn, P, Si, Pb, Zn | 10 medicinal plant species | 0.5 g of dried plant sample extracted with 20 mL ethanol/water solution (50% (v/v)) | [25]
Ag, Al, Ba, B, Ca, Co, Cu, Cr, Fe, Mg, Mo, Mn, Ni, Na, Pb, Ti, Sn, V, K, Zn | Oil samples | 5 g oil mixed with 0.5 g of DES (choline chloride and hydrogen bond donors: tartaric, citric, benzoic, oxalic, acetic, malonic, malic, formic, maleic, succinic, adipic, boric, lactic, ascorbic, gallic, and mandelic acids; 1,4-butanediol; glycerol; sorbitol; ethylene glycol; triethylene glycol; benzamide; urea; thiourea; fructose; glucose; sucrose; maltose) | [92]
Ca, Cu, Ba, Na, K, Fe, Mn, Mg, Mo, Pb, Ni, Sn, V, Zn | Tobacco, lettuce | 100 mg of plant sample mixed with 0.5 g of DES (choline chloride and malic acid, 1:1) at 70 … | …
… | … | A nanocomposite compound Mg/Al-LDH@CNTs was synthesized and used as solid phase | …
… | … | A combination of dispersive liquid-liquid microextraction using a deep eutectic solvent (NADES) as extractant combined with chemical vapor generation | [101]
Se | Cereal and biofortified samples | DES (choline chloride (ChCl) as hydrogen bond acceptor and phenol (PhOH) as hydrogen bond donor) at different mole ratios of ChCl:PhOH = 1:1, 1:2, 1:3, and 1:4 | [102]

The number of published papers on this topic is relatively limited, while the tools for assessing the greenness of analytical methods have rarely been employed. Abellan-Martín and co-workers [101] developed a methodology for the measurement of As, Cd, Hg, and Pb in drugs by ICP-OES, based on chemical vapor generation subsequent to dispersive liquid-liquid microextraction using a natural deep eutectic solvent as the extractant. A 50-fold improvement of LOQs was reported. The developed method was demonstrated to have an excellent green character using the AGREEprep metrics, as evidenced by the AGREEprep score of 0.40 [101]. Sihlahla et al. [102] used alcohol-based deep eutectic solvents (DESs) for sample digestion and determination of Se by ICP-OES. DESs were prepared from choline chloride (ChCl) as an HBA and phenol as an HBD, in different molar ratios. A 0.1 g sample was mixed with 4 mL of the DES and shaken for 3 min using a vortex. The sample was digested for 25 min at 125 °C. Following cooling to room temperature, 4 mL of 3 M HNO3 was added. The greenness of the method was evaluated using three metrics tools: NEMI, AES, and AGREE, and it was demonstrated that the developed protocol is an excellent green method [102]. Given the paucity of existing literature on this subject, further research is required to develop more environmentally friendly techniques for the determination of metals by ICP-OES, as well as to assess their sustainability using the existing assessment tools.
Extraction for Bioaccessibility Studies on Plant Samples
The total concentration of metals in vegetal foodstuffs is not totally transferred to and absorbed by the human body. Thus, recent studies on metal content in edible plants have focused on assessing the fraction of the metal released from the food matrix under conditions similar to those in the gastrointestinal tract, which can be transferred to the body. This portion of elements is referred to as the bioaccessible concentration [103,104]. Table 4 presents several examples of digestion methods used in bioaccessibility studies. The studies dealing with the bioaccessibility of metals from different plants used fresh or dried samples, from which metals are extracted in simulated body fluids (SBF) having pH and enzymes (pepsin, pancreatin, amylase) similar to those in the gastrointestinal tract, and being kept for a similar time of contact (saliva, pH 6.8, 5 min; gastric juice, pH 2–3, 1 h; duodenal juice, pH 6.5–7.0, 3 h) [41]. This method of analyzing bioaccessible fractions of trace elements is a good surrogate of bioavailable concentration and has received acceptance [105].
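The quantity such studies typically report is the bioaccessible fraction, i.e., the share of the total element content released into the simulated fluids. A minimal worked example, with invented concentrations, follows.

```python
def bioaccessible_fraction_pct(released_mg_per_kg, total_mg_per_kg):
    """Percentage of the total element content released into the SBF."""
    return 100.0 * released_mg_per_kg / total_mg_per_kg

total_fe = 45.0     # mg/kg Fe in the dried plant (total, after digestion)
released_fe = 12.6  # mg/kg Fe recovered in the simulated-fluid extract
print(f"bioaccessible Fe: {bioaccessible_fraction_pct(released_fe, total_fe):.0f}%")
```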
Advantages, Limitations and Advances in Plasma Viewing, Sample Introduction Systems and Miniaturization of Optical Emission Spectrometry Instrumentation
The main advantage of ICP-OES is that it is capable of multielement determination over a wide range of element concentrations, making it a very productive technique compared with atomic absorption-based methods. As the main drawbacks, the limits of detection (LODs) and limits of quantification (LOQs), which are higher than in ICP-MS or GFAAS, sometimes make it unsuitable for the direct analysis of toxic elements in plants or vegetables used as foodstuffs due to their very low maximum admitted levels. For this reason, efforts have been made in recent years to improve the mentioned parameters through new approaches in producing plasma, sample introduction systems, plasma viewing, or detection systems [10,106,107].
Inductively coupled plasma (ICP) is generated in an inert gas (typically argon) in a torch having three concentric tubes made of quartz or ceramic, with the aid of a radio-frequency (RF) generator and an induction coil [108]. The legally authorized frequencies for plasma generators are 27.12 MHz and 40.68 MHz, but the frequency of 40.68 MHz is increasingly used in modern equipment because it ensures higher plasma stability and establishes a higher central channel in the ICP, helping in the easier introduction of the sample and leading to increased performance [109].
Concerning plasma viewing, there are two possibilities for observing the light emitted by the plasma: radial and axial viewing. Both viewing modes have advantages and disadvantages. In radial mode, the analytical signals are lower, which can lead to higher detection limits. In the case of elements found at trace concentrations, this represents a clear disadvantage. However, for major elements or elements with a high sensitivity, this is an advantage, because no dilution of the sample is required. Moreover, the background signal is lower in this case; thus, the matrix effect is decreased [109]. Axially viewed plasma has the advantage of collecting all the element emissions over the whole length of the plasma, and thus the emission path length is enhanced compared to the radial view [110]. This results in increased sensitivity for trace elements, but it comes with the disadvantages of an increased background signal and signal saturation for analytes at high concentrations or with high sensitivity (e.g., sodium, potassium, lithium, strontium, etc.). For these reasons, one of the advances in ICP-OES instruments was dual viewing (axial and radial). In this approach, the viewing mode can be selected for each specific element, taking advantage of the optimal plasma viewing mode in multielement analysis.
Another advance in ICP-OES systems concerns sample introduction. Nebulization efficiency was improved by the development of ultrasonic nebulizers, which generate up to 10-fold higher aerosol amounts. In an ultrasonic nebulizer, the sample is injected onto a piezoelectric transducer, which breaks it into a homogeneous fine aerosol, lowering the limits of detection compared to a pneumatic nebulizer [111]. Chemical vapor generation is another approach developed to improve the analytical performance of ICP-OES. In this technique, the analyte is extracted from the matrix as a gas and introduced into the instrument selectively and more efficiently, yielding excellent improvements in LODs [112-115].
The miniaturization of ICP-OES equipment is a growing research trend aimed at making the technique more economically sustainable and practical for on-site applications. The critical aspect of these advances is the miniaturization of individual components [116]. Microplasma technology uses microtorches with microplasmas that run at low power consumption and small gas flow rates [117-119]. However, this is still at the research level, and further development is needed before commercial equipment can be produced.
Method Validation and Performance Parameters for ICP-OES Used in Plant Sample Analysis
Because digested plant samples comprise complex matrices, ICP-OES measurements require studies of the method's performance during validation to obtain reliable results. In these types of samples, both spectral and non-spectral interferences may occur. Spectral interferences can be addressed by selecting alternative wavelengths, provided the sensitivity is not severely affected; another possibility is applying spectral corrections with the spectrometer software, available for many commercial instruments. The minimization or removal of non-spectral (matrix) interferences is usually achieved in three ways: (1) using "matrix-matching" calibration standards for instrument calibration, (2) using the standard addition method, or (3) adding internal standards to blanks, calibration solutions, and samples, with the conditions that the internal standard element is absent from the original solutions and behaves in the plasma similarly to the analytes [120]. However, all three approaches should be carefully studied during method development to obtain good accuracy. Table 5 gives examples of figures of merit reported for element determination in plants using ICP-OES.
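As a minimal sketch of option (3), internal-standard correction can be expressed as rescaling the analyte signal by the measured recovery of the internal standard; the function and numbers below are illustrative, not taken from the cited methods.

```python
# Illustrative internal-standard correction for non-spectral (matrix) effects:
# the analyte signal is divided by the internal-standard recovery, i.e. the
# ratio of the IS intensity in the sample to its intensity in the calibration
# solutions (the IS is spiked at the same concentration in both).
def is_corrected_signal(analyte_signal: float,
                        is_signal_in_sample: float,
                        is_signal_in_standard: float) -> float:
    recovery = is_signal_in_sample / is_signal_in_standard
    return analyte_signal / recovery

# If the matrix suppresses the IS signal to 80% of its calibration value,
# the analyte signal is scaled up by the same factor:
print(is_corrected_signal(1000.0, 8000.0, 10000.0))  # 1250.0
```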
The accuracy of the methods was commonly verified using certified or standard reference materials (SRMs), such as those of tomato leaves and rice flour. Giraldo et al. [121] found that, for Cd determination, recovery percentages for ICP-OES were similar to those obtained by ICP-MS (over 90%). Similar recoveries (96.0-108.3%) were also obtained by the ICP-OES and ICP-MS techniques for element analysis in CRMs after an ultrasound-assisted extraction method [122]. In all studies, the reported recoveries for CRM analysis indicated good accuracy compared with legal requirements [123].
The methods were generally validated in terms of selectivity, sensitivity, limits of detection and quantification, accuracy, and precision prior to their use for real sample analysis [124]. ICP-OES is a versatile technique, mainly due to its multielement capability (up to 70 elements measured simultaneously) and its wide working ranges [125].
Conclusions
ICP-OES has been successfully used to analyze major and trace elements in plant samples. Although 50 years have passed since the first ICP-OES equipment was marketed, the technique remains a fascinating area of research. It can, of course, be applied to many types of samples, but plant analysis is a niche application that requires special attention because of the organic matrix and the need to determine trace and ultratrace concentrations. Many recently published papers deal with improving the sample preparation step. In line with the trend in analytical chemistry towards greener methodologies, much research has been carried out to replace or eliminate toxic reagents in sample preparation procedures. The routinely used procedures for element extraction from plant samples are based on acid digestion, aided by heating on a hot plate or, often, by microwaves. Because removing the organic matrix increases digestion efficiency, a supplementary combustion step can be applied before acid extraction. Deep eutectic solvents are increasingly studied as environmentally friendly solvents for metal extraction prior to ICP-OES analysis. Another extensively studied research area in recent years is the assessment of the bioaccessibility of different elements, mainly from plants used as food sources.
Regarding instrumental ICP-OES developments, many efforts have been made to lower LODs and LOQs through new plasma production methodologies, new sample introduction systems, and improvements in plasma viewing and detection systems.The miniaturization of ICP-OES instruments is a flourishing trend in research aimed at making this analytical technique more economical.
ICP-OES, a well-established technique in many laboratories, has been the focus of recent research aimed at validating ICP-OES-based methods to enhance their accuracy and precision. This review brings together recent applications of ICP-OES to various vegetable samples and underscores its outstanding advantages. In conclusion, ICP-OES continues to be a fascinating area of research, particularly regarding its potential for reduced initial and maintenance costs and its role in the development of greener sample preparation methodologies.
Figure 1 presents a schematic representation of the analytical steps required for determining major and trace elements in plants by ICP-OES.
Figure 1. Summary of the steps of the plant sample preparation process for ICP-OES determination.
Figure 2. Classification of the main element extraction procedures from plant powders for ICP-OES analysis.
Funding: This research was funded by the Ministry of Research, Innovation and Digitization through Program 1 - Development of the national research & development system, Subprogram 1.2 - Institutional performance - Projects that finance the RDI excellence, contract no. 18PFE/30.12.2021, and through the Core Program within the National Research Development and Innovation Plan 2022-2027, carried out with the support of MCID, project no. PN 23 05.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Table 1.
Examples of wet acid digestion procedures for element extraction from plant samples. One reported procedure adds 2 mL of HClO4 and 2 mL of HNO3, with heating at 135 °C for 25 min. A representative entry: Al, Ba, Cu, Ca, K, Fe, Na, Ni, Mg, Mn, S, P, Sr, Zn in chocolate and cocoa; 1 g of sample mixed with 9 mL of 65% HNO3, heated in a water bath at 95 °C for 1 h, then transferred and diluted to 25 mL with deionized water [61].
Table 2.
Examples of combustion and wet acid digestion of ashes for element extraction from plant samples.
Table 3.
Examples of procedures for element extraction with complexing reagents and green solvents.
Table 4.
Examples of digestion methods used in bioaccessibility studies on plant samples.
Table 5.
Figures of merit reported in plant analysis by ICP-OES. | 2024-07-07T15:55:28.627Z | 2024-07-01T00:00:00.000 | {
"year": 2024,
"sha1": "b88e8f4234268440748fe2185c66c1761b4a491c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/molecules29133169",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4768161ae9906174e96bf3427d96f296863a5038",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": []
} |
240348037 | pes2o/s2orc | v3-fos-license | Evaluation of dithiothreitol-oxidizing capacity (DOC) as a serum biomarker for chronic hepatitis B in patients exhibiting normal alanine aminotransferase levels: a pilot study towards better monitoring of disease
Background Alanine aminotransferase (ALT) is the most commonly used serum biomarker for chronic liver diseases (CLDs) but may not accurately reflect hepatic disorders and easily underestimates hepatic fibrosis. The previously revised upper limit of normal (ULN) of ALT (19 U/L for women and 30 U/L for men) increases its sensitivity but yields higher numbers of false-positives. Moreover, CLD patients with ALT lower than the revised ULN may nonetheless have progression of disease. There is therefore a need for novel biomarkers to complement the use of ALT. Here we have evaluated measurements of serum dithiothreitol-oxidizing capacity (DOC) in cohorts of chronic hepatitis B patients with different stages of disease as an exploratory pilot study for this purpose. Methods Serum samples obtained from healthy persons and from chronic hepatitis B patients with normal ALT values were used for sensitivity evaluation. The hepatitis B patients encompassed end-stage liver disease (ELD), chronic hepatitis B (CHB), CHB with persistently normal ALT (CHB-P) and inactive carriers (ICs). Sensitivity was also evaluated with samples from patients with other diseases. The study period was March 2018 to December 2020. Findings DOC was found to be a robust biomarker that may become complementary to ALT measurements, especially in patients displaying low ALT levels. ROC analyses indicated that the AUC values of DOC reached 0.983 and 0.956 in ELD and CHB patients exhibiting normal ALT levels, respectively. Importantly, the AUC values of DOC reached 0.852 and 0.844 in CHB-P patients and ICs, respectively. Such AUC values permit screening and continued monitoring, corresponding to over 30% and 50% sensitivity at 99% and 95% specificity for CHB-P and ICs, respectively. DOC was also significantly correlated with indicators of fibrosis, both APRI (Pearson r = 0.4905, P < 0.0001) and FIB-4 (Pearson r = 0.4421, P < 0.0001). Surprisingly, the AUC values of DOC in the hepatitis B patients with ALT levels lower than the revised ULN were not compromised. In the examined non-liver diseases, DOC was low and normal, including in patients with acute myocardial infarction displaying increased ALT levels. Interpretation The results suggest that DOC is promising as a complementary biomarker used in addition to ALT for monitoring of disease in chronic hepatitis B patients, especially when ALT levels are normal. DOC should be further evaluated for possible clinical use as a biomarker also in other CLDs. Funding This study was funded by the National Natural Science Foundation of China (Grant numbers: 31771971 and 32001013).
Introduction
Chronic liver disease (CLD) is highly prevalent around the world and considered a major public health problem. Current estimates suggest that 844 million people have CLDs, a higher number than for diabetes (422 million), cardiovascular (540 million) or pulmonary diseases (650 million) [1]. The most commonly used serum biomarker for CLD is alanine aminotransferase (ALT). Unfortunately, ALT serum levels do not correlate well with severity of disease, and many patients with normal ALT levels are at risk of having ongoing, undiscovered hepatic inflammation and fibrosis [2-6]. Therefore, the upper limit of normal (ULN) for ALT, adjusted for sex differences (19 U/L for women and 30 U/L for men), was proposed to be lowered in order to increase sensitivity [2,7]. However, a revised lower ULN has not been widely adopted, as it may yield rather high numbers of false-positives [8,9]. Furthermore, CLD patients with ALT even below the revised ULN may still be severely affected by disease, and fibrosis was found in 27.8% of chronic hepatitis B (CHB) patients with ALT lower than the revised ULN [10]. Patients with ALT less than half the value of the ULN may still have non-alcoholic steatohepatitis with significant fibrosis [5]. Therefore, new sensitive yet robust serum biomarkers for facile assessment of disease are urgently needed to complement the use of ALT, especially in patients presenting low ALT levels. Here we evaluated a new biomarker for this purpose with serum from cohorts of chronic hepatitis B patients.
Currently, approximately 3.5% of the global population is estimated to be chronically infected with hepatitis B virus (HBV) [11]. HBV infection causes various outcomes, ranging from inactive carriers (ICs) to end-stage liver disease (ELD), which includes cirrhosis, liver failure and hepatocellular carcinoma. ICs have normal ALT levels [12-14]. Many chronic hepatitis B (CHB) patients also exhibit persistently normal ALT levels (referred to as CHB-P herein) [4,10,15]. ELD patients may also show intermittently normal ALT levels, despite severe liver pathology [16,17]. The urgency of better biomarkers for assessment of disease in patients infected with HBV, together with the variable clinical presentations of these patients, makes this cohort highly suitable for the evaluation of new biomarkers for assessment of liver disease.
We have previously found that serum TXNRD activity increased upon liver injury in mice [18]. Subsequently, we identified that this enzymatic TXNRD activity is counteracted by quiescin Q6 sulfhydryl oxidase 1 (QSOX1), also present in human and mouse serum [19]. QSOX1 accounts for the major part of the sulfhydryl oxidases (SOX) in serum, since inhibitory monoclonal antibodies specific for human QSOX1 largely inhibited the activity opposing the TXNRD measurements [19]. A sensitive plate-reader assay for determination of serum QSOX1 activity, based upon fluorescence measurements of Amplex UltraRed-hydrogen peroxide complexes, has been developed [20]. That method for SOX activity determination is hence based upon detection of the enzymatic reaction product, hydrogen peroxide (H2O2). Alternatively, we found that measurements of the disappearance of an artificial thiol substrate, dithiothreitol (DTT), here referred to as DTT-oxidizing capacity (DOC), could also be used to assess the total thiol oxidation activity in serum [19]. The DOC assay is colorimetric, easy to perform, stable and inexpensive, making its methodological features attractive for clinical use. Thus, we here evaluated the potential of using either SOX or DOC activity determinations in serum as biomarkers for disease in CLDs. We focused on the use of these assays in cohorts of chronic hepatitis B patients having normal ALT levels, to assess whether they are able to overcome the drawbacks of low ALT levels described above. The results suggest that DOC, but not SOX, activity measurements can provide informative value as an additional biomarker for CLD that can become complementary to ALT determinations, especially in patients displaying low ALT values in spite of ongoing disease.
Patients and specimens
For this study, we wished to obtain samples from local hospitals of as many CLD patients as practically possible. In total, 2251 serum samples from adult donors were used in this cross-sectional study. To evaluate the sensitivity of the examined biomarkers in CLDs, we recruited 1693 CLD patients from four cohorts with different clinical presentations of HBV infection: 757 ELD patients under drug treatment (419 cases with intermittently normal ALT and 338 with abnormal ALT); 511 CHB patients under drug treatment (196 with intermittently normal ALT and 315 with abnormal ALT); 217 CHB-P patients; and 208 ICs. The definitions of the studied cohorts followed the guidelines on the management of chronic HBV infection [12-14]. Briefly, i) the ELD cohort was composed of HBV-associated decompensated cirrhosis, liver failure and hepatocellular carcinoma patients; ii) the CHB cohort was composed of HBV-associated chronic hepatitis patients; iii) the CHB-P cohort was composed of CHB patients under drug treatment and with persistently normal ALT for the last 12 months; and iv) the IC cohort was composed of inactive HBV carriers with normal ALT and without drug intervention. HBsAg was positive for more than 6 months in all groups.
Research in context
Evidence before this study
Alanine aminotransferase (ALT) is the most commonly used serum biomarker for clinical monitoring of liver disease, despite evidence of low sensitivity and low specificity. Rather recently proposed revisions of the reference interval for normal ALT levels may further impair its use as a biomarker, as a decreased upper normal limit yields higher numbers of false positives, while still showing too low a sensitivity for use in cases of liver fibrosis. Thus, there is a major and widely acknowledged need for new or additional biomarkers to aid facile assessment and monitoring of liver disease.
Added value of this study
Here we present results from an exploratory pilot study suggesting that the assessment of sulfhydryl oxidase activities in serum, determined through a new activity assay hereby named dithiothreitol-oxidizing capacity (DOC), can provide a facile and reliable biomarker that may be significantly more robust than ALT for assessment of disease in different cohorts of chronic hepatitis B patients, especially in those displaying low or normal ALT levels. It also correlated well with markers for liver fibrosis. We suggest that DOC can be evaluated as an additional biomarker together with ALT that should be of special diagnostic value in monitoring of patients displaying normal ALT levels.
Implications of all the available evidence
Based upon our findings we propose that serum DOC should be further evaluated for potential use in clinical monitoring of disease progression in chronic hepatitis B, and possibly also in other liver diseases.
For the CHB patients, HBV DNA levels were above 2000 IU/mL before antiviral therapy. HBV DNA levels of ICs were under 2000 IU/mL. HBeAg was positive or negative in CHB and CHB-P patients, and was negative in ICs. To evaluate the performance of the biomarkers in diseases not related to liver pathology, we recruited 77 patients with acute myocardial infarction (AMI) and 163 patients with diseases other than CLD or AMI, including 55 with stroke, 50 with diabetes, and 17 with pulmonary tuberculosis. As 318 healthy controls (HC), serum was obtained from overtly healthy blood donors having normal ALT, aspartate aminotransferase (AST), total bilirubin (TB) and direct bilirubin (DB). The numbers of persons, with ages and gender distribution, for HC donors and patients are given in Tables S1-S3. The study was approved by the ethics committees of Anhui Medical University, Anhui, China, and all study participants provided written informed consent. Samples without missing reference data for ALT, AST, TB and DB were randomly collected between March 2018 and December 2020 from three hospitals affiliated with the University in Anhui, China (the First Affiliated Hospital, the Second Hospital, and the Anhui Provincial Hospital). STARD (Standards for Reporting Diagnostic Accuracy) was used as the reporting guideline.
Handling of blood samples
Venous blood samples were collected and centrifuged to obtain serum samples, which were stored at -80°C until analyses. The serum levels of ALT, AST, TB and DB were determined in the clinical routine laboratories of the local hospitals.
SOX activity assay and SOX units
The fluorescence-based assay for SOX activity was performed at 25°C according to the method of Israel et al. [20]. One unit (U) was defined as a fluorescence intensity increase by 1 per min and the resulting SOX activity was presented as U/mL serum.
DOC activity assay and DOC units
To measure the dithiol oxidation activities in serum (DOC), samples of serum (15 µL) diluted with saline (85 µL) were mixed with 50 µL of a reaction mixture containing 10 mM EDTA-Na2 and 1 mM DTT in HEPES (200 mM, pH 7.2). For background subtraction for each sample, serum (15 µL) and saline (85 µL) were also mixed with 50 µL of a mixture containing 10 mM EDTA-Na2 in HEPES without DTT.
The difference between these paired serum tests represented the total thiol level of the reaction mixture in the presence of serum. As another control, measuring the thiol level of the reaction mixture in the absence of serum, 100 µL saline was mixed with 50 µL HEPES (200 mM, pH 7.2, 10 mM EDTA-Na2), either with or without 1 mM DTT. The difference between these paired saline tests represented the control for the total thiol level in the absence of a serum sample. These four reaction mixtures were made for each sample, and the DOC activity assay was then performed by incubation at 37°C for 15 min. Then 200 µL Tris buffer (200 mM, pH 8.0) containing 6.6 M guanidine hydrochloride and 1 mM DTNB was added, in order to terminate the reaction by denaturing all proteins with guanidine hydrochloride and to determine the total thiol content by reaction with DTNB (releasing TNB⁻ anions with absorbance at 412 nm upon reaction with free thiols). After 5 min, and within 30 min following initiation of the assay, the absorbance of each reaction was determined at 412 nm using a 96-well plate reader. The extent of thiol decrease in the serum sample during the assay, thus defining its DOC activity, was calculated using the following formula:
DOC (%) = (Thiol level in the absence of serum − Thiol level in the presence of serum) / (Thiol level in the absence of serum) × 100%

If the thiol decrease in the assay exceeded 55%, the serum was diluted for redetermination. One U of DOC was defined as a 1% DTT (2% thiol) decrease during the 15-min assay. DOC was then presented as U/mL serum.
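A worked sketch of the calculation defined above (not the authors' code): the paired, background-subtracted A412 readings give the thiol levels with and without serum, and one unit corresponds to a 1% DTT (2% thiol) decrease. The example absorbances are invented, and the further normalization to U/mL serum is left out, since the exact scaling is not restated here.

```python
# Sketch of the DOC calculation from the paired DTNB absorbance (A412)
# readings described above; example values are synthetic.
def doc_units(a412_saline_dtt: float, a412_saline_blank: float,
              a412_serum_dtt: float, a412_serum_blank: float) -> float:
    thiol_no_serum = a412_saline_dtt - a412_saline_blank    # saline control pair
    thiol_with_serum = a412_serum_dtt - a412_serum_blank    # serum pair
    decrease_pct = (thiol_no_serum - thiol_with_serum) / thiol_no_serum * 100.0
    if decrease_pct > 55.0:
        raise ValueError("thiol decrease > 55%: dilute serum and re-run")
    return decrease_pct / 2.0  # 1 U = 1% DTT decrease = 2% thiol decrease

# A 20% thiol decrease over the 15-min assay corresponds to 10 U:
print(doc_units(1.20, 0.10, 0.98, 0.10))  # 10.0
```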
Statistics
Receiver operating characteristic (ROC) curves and multivariate logistic regression were analyzed with SPSS (version 17.0). Other analyses were performed with GraphPad Prism (version 5.0). Differences between two independent groups were tested with the Mann-Whitney U test if the data exhibited a non-normal distribution, as examined with the D'Agostino & Pearson omnibus normality test. Accordingly, data are presented as median with 25th and 75th percentiles. Data are presented as mean ± range in the case of two replicates. The coefficient of variation (CV) was calculated as standard deviation/mean. Goodness of fit of the standard curves of the SOX or DOC assays is presented as R². The Pearson correlation coefficient is presented as r. A more rigorous P value (less than 0.005) was considered statistically significant [21].
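As a hedged illustration of the statistics described above, the snippet below runs a Mann-Whitney U test and a ROC analysis on synthetic data with SciPy and scikit-learn (rather than SPSS/Prism); for two groups, the ROC AUC equals U/(n1·n2).

```python
# Synthetic two-group comparison mirroring the analysis plan above.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
doc_healthy = rng.normal(2.0, 0.2, 100)   # invented DOC values, healthy
doc_patients = rng.normal(3.5, 0.8, 100)  # invented DOC values, patients

u_stat, p = mannwhitneyu(doc_patients, doc_healthy, alternative="two-sided")
labels = np.r_[np.zeros(100), np.ones(100)]  # 1 = patient
scores = np.r_[doc_healthy, doc_patients]
auc = roc_auc_score(labels, scores)

print(p < 0.005)                               # the stricter threshold used above
print(abs(auc - u_stat / (100 * 100)) < 1e-9)  # AUC = U / (n1 * n2)
```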
Role of funding source
The funders of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the report. The corresponding authors had full access to all the data in the study and had final responsibility for the decision to submit for publication.
Methodological validation of SOX and DOC assay performance
For SOX and DOC assay validations, a pooled healthy human serum sample was prepared. For SOX measurements, using up to 2 µL pooled serum, the rate of increase in fluorescence intensity during 3 minutes of assay correlated linearly with increasing serum volume (R² = 0.9901, P < 0.0001) (Fig. 1A). For the readout of the DOC assay, the thiol decrease during the assay correlated linearly (R² = 0.9863, P < 0.0001) with increasing serum volumes up to 30 µL, at which point 55% of the thiols in the sample had been consumed and saturation of the assay was reached (Fig. 1B). All subsequent DOC assays in this study were therefore performed so that thiol decreases during the assay remained below 55%. Determining the time-dependent decrease of thiols in the assay using 15 µL pooled serum, a good linear correlation (R² = 0.9915, P < 0.0001) was maintained up to 30 min of incubation (Fig. 1C). Based on this result, 15-min reactions were used for the remainder of this study. To assess the extent of intra-assay variation, the same pooled serum sample was used in six technical replicates, whereby the CVs for the SOX and DOC assays were found to be 8.3% and 4.6%, respectively (Fig. 1D). For assessment of inter-assay variation, the same pooled serum was measured once daily on 20 different days; the CVs of the SOX and DOC assays were in this case 12.6% and 6.0%, respectively (Fig. 1D). These results showed that the accuracy and stability of the DOC assay were higher than those of the SOX determinations. For both assays, the pooled serum was used as a reference sample for all subsequent measurements of SOX and DOC activities in this study, with only data obtained from assays showing similar values for the reference sample (±5%) deemed reliable and used for the results as determined and analyzed herein.
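A minimal sketch of the precision calculation used above: CV = standard deviation / mean over repeated measurements of the pooled serum. The replicate values below are invented, and whether the SD uses n or n−1 is an assumption.

```python
# Intra-/inter-assay precision as percent coefficient of variation.
import numpy as np

def cv_percent(replicates) -> float:
    x = np.asarray(replicates, dtype=float)
    return float(np.std(x, ddof=1) / np.mean(x) * 100.0)  # sample SD assumed

intra_day = [3.02, 3.10, 2.95, 3.21, 2.88, 3.05]  # six technical replicates
print(round(cv_percent(intra_day), 1))  # percent CV of the pooled serum
```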
DOC is a promising biomarker for disease monitoring in hepatitis B patients with normal ALT levels
First we analyzed serum samples from healthy controls and hepatitis B patients with clinically confirmed ELD, CHB, CHB-P, or ICs, where all of the patients had normal ALT values (< 40 U/L) ( Fig. 2A).
Using these samples, SOX and DOC activities were determined and compared as biomarkers for disease with AST, DB or TB using ROC curve analyses. It should be noted that the ELD patients had clinically verified irreversible pathological alterations in the liver despite normal ALT levels. The AUC value for DOC in the ELD patients was 0.983, which was significantly higher than that of any of the other examined serum parameters, including SOX (0.848), AST (0.865), TB (0.754) and DB (0.808) (P all < 0.0001) (Fig. 2B). The AUC value for AST (0.865) suggests that AST could have some complementary value in this patient cohort, in spite of the lack of abnormal ALT. However, in CHB patients (Fig. 2C), who present with less clinical severity, AST had an inadequate AUC value (0.648), as did SOX (0.614), TB (0.528) and DB (0.497). DOC, however, still performed well as a biomarker in CHB, with an AUC value as high as 0.956 (Fig. 2C), significantly superior to all the other potential biomarkers examined (P all < 0.0001). DOC also possessed informative AUC values of 0.852 and 0.844 in CHB-P and ICs, respectively, whereas the other biomarkers had again lost any diagnostic value (Figs. 2D, E). Based on multivariate logistic regression analysis with gender and age as covariates and healthy persons as the control, we found that DOC was a strong independent biomarker, with odds ratio values of 12311, 17777, 461 and 493 in the subgroups of ELD, CHB, CHB-P and ICs, respectively (P all < 0.0001), while the odds ratio values of the other biomarkers (SOX, AST, TB and DB) were in the range of 0.5-1.7 (Table 1).
Assessment of sensitivity at 95% specificity in serum from hepatitis B patients with normal ALT levels
Although the hepatitis B patients analyzed above were selected for normal ALT levels, the average ALT levels in the four subgroups were still significantly increased compared to the HC group (P all < 0.0001) (Fig. 2A). We therefore performed repeated analyses for the other biomarkers analyzed here, showing that DOC, SOX and AST also had significantly increased average levels in these patient groups (Figs. 3A-C), but not TB or DB (Figs. 3D, E). Using these data to define limits for 95% specificity, the cutoff values for clinically validated healthy persons (HC) were 2.62 U/mL (DOC), 50.2 U/mL (SOX), 28 U/L (AST), 23 µmol/L (TB), and 8 µmol/L (DB) (Fig. 3). Using those values as the upper limit of normal, the fraction of the ELD subgroup having higher values (referred to herein as sensitivity) was 93% for DOC, whereas none of the other biomarkers exceeded 63% (Fig. 3). In the CHB subgroup, the sensitivity for DOC was 78%, whereas the sensitivity of the other biomarkers was no more than 25% (Fig. 3). In the CHB-P and IC subgroups, presenting with the least severe disease, the sensitivity for DOC was 55% and 56%, respectively, while the sensitivity of the other biomarkers was no more than 15% and 17%, respectively (Fig. 3). These results suggest that DOC can be a promising biomarker for assessment of disease in hepatitis B patients presenting with normal ALT values.
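The cutoff logic above can be sketched as follows: the cutoff is the 95th percentile of the healthy-control distribution (fixing 95% specificity), and "sensitivity" is the fraction of a patient subgroup above it; all values below are synthetic.

```python
# Cutoff at 95% specificity from healthy controls, then sensitivity of a
# patient subgroup above that cutoff (synthetic data).
import numpy as np

def cutoff_and_sensitivity(healthy, patients, specificity=0.95):
    cutoff = float(np.percentile(healthy, specificity * 100))
    sensitivity = float(np.mean(np.asarray(patients) > cutoff))
    return cutoff, sensitivity

rng = np.random.default_rng(1)
healthy = rng.normal(2.0, 0.2, 300)  # invented healthy-control DOC values
eld = rng.normal(4.0, 0.7, 300)      # invented ELD DOC values

print(cutoff_and_sensitivity(healthy, eld))  # e.g. (~2.3 U/mL, ~0.99)
```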
DOC exhibits the lowest variation among examined biomarkers
The variation of DOC in healthy people was narrow (CV = 9.4%) (Fig. S1B), whereas the variations of the other examined biomarkers, including SOX, were large; the narrowest among them was AST, with a CV of 23.1% (Fig. S1D). For reference, the CV of ALT was 37.5% (Fig. S1A). The narrow variation of DOC in healthy people suggests that DOC homeostasis is tightly regulated; even a modest elevation over its cutoff value (2.62 U/mL) may thus indicate hepatic disorder. Moreover, the variation of DOC within each CLD subgroup with normal ALT was also the smallest among the tested biomarkers (Fig. S1G). This profile of narrow variation in both healthy persons and each subgroup of CLD patients with normal ALT contributes to higher diagnostic accuracy. Since DOC outperformed SOX as a potential biomarker, we next focused on further validating DOC for such use.
DOC in hepatitis B patients does not correlate with ALT
It has been demonstrated that ALT levels under the revised ULN cannot be used to exclude liver disease [5,10]. We thus further analyzed the hepatitis B patients with ALT levels below the revised ULN with regard to their DOC values. For male and female patients, the AUC values of DOC were 0.979 and 0.985 (ELD) and 0.951 and 0.945 (CHB), respectively (Table 2). It should again be noted that ALT levels were low and, by definition, normal in all these patients (<40 U/L), but it remained possible that ALT might still correlate with the DOC values. However, we found an evident lack of correlation between ALT and DOC in all subgroups of these patients (all Pearson r < 0.25), while all presented with a dominance of abnormally high DOC levels irrespective of their ALT levels within this analyzed range (Fig. 4).
We next asked whether DOC correlated with ALT using samples from CHB-associated ELD or CHB patients presenting abnormally high levels of ALT, and assessed the performance of DOC and the other potential biomarkers analyzed here in these patients. Table S4 shows the ROC analyses of the serum markers in CHB-associated ELD or CHB patients having abnormal ALT values. The AUC for DOC in ELD or CHB patients reached 0.996 and 0.987, respectively, which were significantly higher than those of SOX, TB and DB, and non-significantly higher than that of AST, an indispensable biomarker for CLDs. Interestingly, analyzing the possible correlation between DOC and ALT in the samples of ELD or CHB patients presenting abnormally high ALT levels, we again found no evident correlation between the two biomarkers, while DOC was still elevated above normal in the vast majority of samples (Fig. S2). This shows that DOC and ALT should be considered two independent biomarkers for liver disease, as they display no covariance with each other in the patient cohorts studied here.
Assessment of DOC and ALT as biomarkers in AMI and other diseases
ALT values are often increased in AMI patients [22,23]. We thus next compared the performance of DOC and ALT as biomarkers in AMI patients. The AUC value of a ROC analysis for ALT in AMI was 0.881, which was significantly higher than that for DOC (0.612, P < 0.0001) (Fig. 5A). This shows how ALT increases in serum also correlate well with AMI, as is well known, but that DOC does not seem to be affected to a major extent by AMI. We also examined DOC performance in other patient groups, where ALT is typically not increased, such as stroke, diabetes and pulmonary tuberculosis, revealing similar AUC values for ALT and DOC of 0.514 and 0.566, respectively (P = 0.2253, Fig. 5B). These results suggest that DOC in serum, similar to ALT, is not affected to any major extent in these other diseases. Gender is another confounder affecting ALT sensitivity, with ALT levels generally being higher in healthy males than in healthy females [7], as also found here (Fig. 5C). Unlike ALT, gender seemed not to affect the DOC levels in healthy donors (Fig. 5D).
Discussion
We have here found that the new assay for DOC measurements yields a serum biomarker for liver disease that shows good promise for use in chronic hepatitis B patients, irrespective of their ALT values. Most importantly, with ALT often being normal in several cases of chronic hepatitis, especially in cases with major fibrosis, the DOC biomarker may prove to be of significant clinical value as an additional biomarker together with ALT, and should thus be further evaluated for use in the monitoring of such patients.
A systematic review with meta-analysis found that approximately one fifth of CHB-P patients have significant hepatic fibrosis, based on liver biopsy findings [10]. ICs account for the largest subgroup among HBV-infected individuals. Up to 30% of all IC patients are likely to undergo spontaneous reactivation of hepatitis B, with increased risk of progressive liver injury or hepatic decompensation [24,25]. Given that we found over 30% sensitivity at 99% specificity with DOC for CHB-P patients and ICs, who often have low ALT levels, DOC could potentially be a powerful biomarker for screening and continued follow-up in these groups. If most of these patients with abnormal DOC are at, or eventually progress to, the stage of liver fibrosis, progressive liver injury and inflammation, or hepatic decompensation, DOC should be a useful biomarker for predicting long-term outcomes in these two subgroups. Many CHB or HBV-associated ELD patients have normal ALT; however, most of them have abnormal DOC. If a good prognosis is associated with a persistent decrease of DOC, and vice versa, DOC should be a useful biomarker for predicting long-term outcomes in these two subgroups as well. Overall, DOC provides a tool for better monitoring of disease, especially in patients exhibiting normal ALT, although DOC cannot discriminate between different subgroups of HBV carriers. Future studies focusing on the possible association of DOC with FibroScan results and histological scores, the gold standard for liver fibrosis, are warranted. A major concern is whether the high sensitivity of DOC found in the hepatitis B patients is also seen in diseases other than CLDs. We found that DOC was not elevated in the patients with non-liver diseases analyzed here, and it remained insensitive as a biomarker in AMI patients, in whom ALT is easily increased; however, there may be other diseases or conditions in which DOC is elevated in the absence of liver disease. This should be studied further.
The lack of correlation between DOC and ALT suggests that the two biomarkers increase in serum by different mechanisms in patients having CLDs. It was demonstrated earlier that the enzyme QSOX1 accounts for the major part of human serum thiol oxidation activity, since an inhibitory monoclonal antibody specific for human QSOX1 largely inhibits this activity [19]. QSOX1 can be efficiently secreted from mammalian cells, and the processing of QSOX1 within the Golgi apparatus affects its secretion [26]. A highly conserved N-linked glycosylation site is required for QSOX1 secretion from mammalian cells [27], and quantitative proteomics have revealed that serum QSOX1 gradually increases with disease advancement caused by HBV [28]. The physiological function of extracellular QSOX1 has also been characterized and suggested to relate to remodelling of collagens in the extracellular matrix [29-31].
Moreover, the production of hydrogen peroxide by QSOX1 in the extracellular space may trigger inflammation [32]. With modulation of the extracellular matrix being a hallmark of liver fibrosis, and with hepatic stellate cells and portal fibroblasts being important sources of matrix proteins in hepatic fibrosis [33,34], it is possible that secreted QSOX1 is related to physiological responses to CLD. An important question is why the DOC assay shows better performance than the SOX assay, as both assays measure QSOX1-like activity. One reason may be H2O2-metabolizing enzymes in serum, such as catalase or glutathione peroxidase (GPx3), that could potentially show activity in the SOX assay. Such H2O2-metabolizing enzymes would, however, not interfere with the DOC assay, which could thereby explain its better performance compared to the SOX assay.
In the present study, the median age in the ELD subgroup was significantly higher than in the HC group, which may potentially be a limitation. However, we found no evident correlation between DOC and age in any examined group (Pearson r: -0.13-0.17), and neither were the DOC levels in healthy persons affected by gender. Still, the DOC assay needs to be further evaluated in diseases other than those studied herein, including both liver and non-liver diseases. A limitation of DOC is that it cannot discriminate between different subgroups of HBV carriers. Other limitations of the present study include the lack of information on fibrotic scores or histopathological findings in correlation to DOC values in individual patients. Such potentially important determinants underlying increased DOC levels should be addressed in forthcoming studies, as should the molecular mechanisms of DOC secretion and, finally, the possible biological functions of this enzymatic activity in serum. Based upon the results of this exploratory pilot study, we suggest that DOC should be considered as a new, seemingly reliable complementary biomarker for monitoring of disease in chronic hepatitis B patients that can be used together with ALT for improved diagnostic power. Additional studies to corroborate these findings are needed prior to its application.

Table note: cutoff values of the HC at 95% specificity were 2.62 (U/mL, DOC), 50.2 (U/mL, SOX), 28 (U/L, AST), 23 (µmol/L, TB) and 8 (µmol/L, DB). Abnormal indicates the rate over the cutoff value. Q, quartile; DOC, dithiothreitol-oxidizing capacity; SOX, sulfhydryl oxidases; ALT, alanine aminotransferase; AST, aspartate aminotransferase; TB, total bilirubin; DB, direct bilirubin; HC, healthy controls; ELD, end-stage liver disease; CHB, chronic hepatitis B; CHB-P, CHB with persistently normal ALT levels; ICs, inactive carriers.
Data sharing statement
All data generated or analyzed during this study are included in this article and its supplementary material files. Further enquiries can be directed to the corresponding authors.
Contributors
JZ conceived and supervised the study, wrote the initial draft, and coordinated with all other co-authors in writing of the final version of the manuscript. LY and KZ contributed experimentally by measuring the DOC and SOX activities in all serum samples, and assisted in the writing process. YZ, ZL, TH, XZ, LL, and ZZ: contributing clinicians with patient contacts, responsible for clinical data, and overseeing collections of serum samples. In addition, ZZ participated in data analysis and interpretations. EA contributed by discussing experimental layout and interpretations, and helped writing the manuscript. JZ and ZZ accessed and were responsible for the raw data associated with the study.
Declaration of Competing Interest
JZ has a Chinese Patent associated with measurement of total thiol-oxidizing capacity in serum. The other authors declare no competing interests.
Funding
This study was funded by the National Natural Science Foundation of China (Grant numbers: 31771971 and 32001013).
Supplementary materials
Supplementary material associated with this article can be found in the online version at doi:10.1016/j.eclinm.2021.101180. | 2021-11-01T15:09:36.768Z | 2021-10-30T00:00:00.000 | {
"year": 2021,
"sha1": "2d007733b3e72d9b3f088332774fb4ad16c16a70",
"oa_license": "CCBY",
"oa_url": "http://www.thelancet.com/article/S2589537021004600/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ff8d1b312d545b7a1183ea06282d002cad8deba8",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
238254617 | pes2o/s2orc | v3-fos-license | Editorial: MILD Combustion: Modelling Challenges, Experimental Configurations, and Diagnostic Tools
Over the last years, particular attention has been paid to combustion regimes that are able to ensure stable, complete, and efficient combustion, together with a strong reduction of pollutants such as CO, NOx, and soot. Moderate or Intense Low oxygen Dilution (MILD) combustion (Cavaliere and de Joannon, 2004) has gathered increasing attention in recent years, as it ensures very high combustion efficiencies with very low pollutant emissions compared to conventional combustion regimes, due to the reduced temperature peaks and macroscale homogeneity achieved by means of high recirculation of exhausts in the reaction volume. Such a combustion regime shares similarities with other combustion concepts, such as flameless combustion (Wünning and Wünning, 1997), high temperature air combustion (HiTAC) (Katsuki and Hasegawa, 1998), and colorless distributed combustion (CDC) (Arghode and Gupta, 2010). The acronym MILD will be retained for the rest of the present editorial. MILD combustion has been implemented in several furnace-based power generation and manufacturing applications; however, its extensive application is still partially hindered by the limited understanding of many facets of the underpinning elementary processes and their peculiar interplay in this combustion mode. One of the major peculiarities derived from the change of the elementary processes is related to the very strong interactions between turbulence and chemistry occurring in this regime. In MILD combustion, two contrasting effects emerge: the Damköhler number is of the order of unity or less, whereas the turbulence level is very high, to ensure effective gas recirculation and mixing at the microscale. These two characteristics lead to a new paradigm of the combustion process, where no flame fronts are visible and the combustion process covers an extensive part of the combustion chamber. It is evident that some principles, well consolidated for a standard combustion process, are no longer applicable in MILD combustion. For example, many common combustion model assumptions, such as the infinitely thin reaction zone, cannot be made in MILD conditions. Analogously, consolidated kinetic models are not able to accurately reproduce experimental data, at least in some relevant working conditions. To overcome these issues, many effective tools for studying and designing MILD combustion processes and systems with novel conceptual approaches have been developed over the years. However, consolidated combustion models and a consistent experimental database are still required to identify MILD combustion regimes and to effectively represent chemical kinetics, heat transfer, turbulence-chemistry interactions, and other processes in this peculiar regime.
The objective of this research topic is to highlight and discuss open issues, opportunities, and new findings in MILD combustion, with a focus on modelling approaches, experimental configurations (available and under development), and the critical assessment of existing diagnostic tools.
The research topic has two main cores. The first is represented by reviews on key issues faced over the years since the formal definition of MILD combustion in the early 2000s. This part represents an excellent reference summary for an exhaustive overview of this process, with the accent on the open questions.
Indeed, the review by Sabia and de Joannon highlights the critical effects of the high dilution level on the chemical kinetics of fuel oxidation, focusing on the role of diluent species. Based on the literature data, they show how the overall reduction of reaction rates due to dilution stresses the competition among different kinetic paths and brings out very peculiar behaviors, previously undetected, that help better understand chemical kinetics and the role of third body effect in MILD and standard combustion conditions.
Li and Parente thoroughly review the application of reactor-based models to the simulation of a canonical MILD combustion system, the jet in hot co-flow (JHC). The effectiveness of the partially stirred reactor and eddy dissipation concept combustion models in the context of Reynolds-averaged Navier-Stokes (RANS) and large eddy simulation (LES) is assessed. The importance of taking into account finite-rate chemistry effects and of providing a reliable estimation of the characteristic time scales is underlined.
Reduced reaction rates and the mixing between diluted and/or preheated reactants in MILD combustion locally induce the formation of peculiar reaction structures which are reviewed by Sorrentino et al. They focus on the ignidiffusive structures and their "distributed ignition" nature. The authors analyze the main characteristics of such structures in the mixture fraction space, namely, the thickness of the oxidation structures, the presence/absence of a pyrolysis region, and the loss of correlation between the maximum heat release rate and the stoichiometric mixture fraction.
The peculiar characteristics of such a process also impact the formation of nitrogen oxides (NOx). Iavarone and Parente provide an evaluation of the possible kinetic pathways active in MILD conditions and outline suitable modelling approaches to predict NO x emissions in CFD simulations. An assessment of the performances of selected models in estimating NO x formation for lab-scale MILD combustion burners is then presented, followed by a discussion about relevant modelling issues, perspectives, and opportunities for future research.
Heat transfer plays a particular role in MILD combustion. Sorrentino et al. highlight the role of heat transfer in the combustion peculiarities of MILD reactors. In particular, the thermal behavior of these systems is analyzed to stress the distinctive role of heat losses, the relative contributions of both the convective and radiative terms, and their influence on MILD macroscopic features.
The experimental study of MILD combustion requires the establishment of new experimental configurations and diagnostic methodologies. Medwell and Evans review a number of optical diagnostic techniques (Rayleigh and Raman scattering, planar laser-induced fluorescence, coherent anti-Stokes Raman scattering (CARS), and spectroscopy) for the characterization of the MILD combustion of gas and liquid fuels in the JHC.
Chinnici et al. discuss the hybridization of MILD combustion with renewable sources, reviewing the numerical work on a hybrid solar receiver combustor (HSRC), coupling a MILD combustion burner with a concentrated solar radiation receiver. The authors analyze the efficiency of the system as a function of the solar radiation contribution, indicating the requirements in terms of the reactor dimension to reach an appropriate coupling efficiency.
The second core of the research topic is represented by articles focused on original research on different topics under discussion in MILD combustion, covering the fundamental understanding of the process, its numerical modelling, and the identification of optimal reactive scalars to assist experimental diagnostics.
Swaminathan relies on recent direct numerical simulation (DNS) data to show that a revised theory involving at least two chemical timescales is required to describe the inception of MILD combustion and describe the strong interactions between autoignition and flame propagation. Moreover, the relevance of MILD combustion to supersonic combustion is explored theoretically, providing qualitative support using experimental and numerical Schlieren images.
Sidey-Gibbons and Mastorakos analyze the critical phenomena in MILD combustion using an asymptotic theory for extinction conditions of non-premixed flames and well-stirred reactors. Results of the analysis suggest that MILD combustion systems do not show sudden ignition and extinction behavior, and therefore exhibit a smooth, stretched S-shaped curve rather than a folded one with inflection points, thus providing a potential alternative definition of MILD combustion.
Ferrarotti et al. investigate the correlation between the heat release rate (HRR) and species mole fractions and net reaction rates in the JHC, suggesting that typical markers (O, OH, and OH*) correlate fairly well with HRR, but improved correlations can be achieved with appropriate combinations of species mole fractions, particularly for the MILD region of the flame.
Goktolga et al. present direct numerical simulations of igniting mixing layers, considered representative of the JHC configuration, using both detailed chemistry and the multistage flamelet-generated manifold (MuSt-FGM) approach. Results indicate that the MuSt-FGM approach can predict the ignition delay time fairly well, while it overpredicts the average heat release rate.
Amaduzzi et al. benchmark the flamelet-generated manifold (FGM) approach with a reactor-based model, the partially stirred reactor (PaSR), for a MILD system with internal recirculation. The results show that the FGM model strongly overpredicts temperature profiles in the reactive region while yielding better results along the central thermocouple. The PaSR closure with a dynamic estimation of the mixing constant is found to provide improved results for both lateral and central thermocouple measurements. A flame index analysis indicates how the FGM model predicts a typical non-premixed region after the injection zone, contrary to the experimental observation.

Perpignan et al. present a novel approach for the automatic generation of chemical reactor networks (CRNs) from simplified CFD simulations, for the subsequent evaluation of pollutant emissions. Data from a non-premixed burner fuelled with CH4 at various equivalence ratios are used for this purpose. The CRN results are capable of reproducing the non-monotonic behavior with equivalence ratio, which cannot be captured by simplified CFD simulations. However, the agreement between experimental and predicted NOx emissions is not fully satisfactory, indicating a need for improving the clustering step in the CRN generation process.
In conclusion, the research topic provides a comprehensive overview on the key features and existing challenges in MILD combustion, highlighting the current research efforts and the opportunities ahead. The unique combination of review and original research articles makes it a key collection for researchers and practitioners starting or already in the field.
AUTHOR CONTRIBUTIONS
All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication. | 2021-10-04T13:16:30.639Z | 2021-10-04T00:00:00.000 | {
"year": 2021,
"sha1": "c88cb85b7cc56b5801a1ce77c0fc6523d566fd2c",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmech.2021.726633/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "c88cb85b7cc56b5801a1ce77c0fc6523d566fd2c",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
149093151 | pes2o/s2orc | v3-fos-license | Counterfactual Communities: Strategy Games, Paratexts and the Player’s Experience of History
The genre of history strategy games is a crucial area of study because of what is at stake in the representation of controversial aspects of history in popular culture. Previous work has pointed to various affordances and constraints in the representation of history, based on the framing of the game interface, the alignment of goals with certain strategies and textual criticism of the contents of the games. In contrast, this article examines these games from the perspective of the player’s experience of play in relation to a wider gaming community. It is in these counterfactual communities that players negotiate their individual experience with their knowledge of the history that is presented in the games that they play, indicating that the relationship between digital games, players and history is highly contextual. The relevant practices of players of history strategy games are illustrated with examples from the official and unofficial communities of the Paradox Interactive games Europa Universalis II and Victoria: Empire Under the Sun. The shared paratexts demonstrate how positions are negotiated in relation to the ‘official’ version of history presented in the games. These negotiations are made tangible through the production and sharing of paratexts that remix the official history of the games to include other perspectives developed through counterfactual imaginations. These findings indicate the importance of including perspectives from gaming communities to support other forms of analysis in order to make rigorous observations about the impact of digital games on popular history.
Introduction
Digital games are now a key element in the circulation of history in popular culture, which takes many forms, from Downton Abbey (ITV Studios, 2010-2015) and Outlander (Left Bank Pictures, 2014-), through to Dunkirk (Nolan, 2017) and digital games like Medal of Honor (Danger Close Games, 2010). Even though these texts are understood as 'entertainment', the question of 'which history' may still arise. While Christopher Nolan's film Dunkirk was met with almost universal acclaim, concerns were raised in India about the lack of Indian soldiers in the film, especially considering the key role that they played in the real evacuation (Bhushan, 2017). While Nolan's intention is to focus on individual experiences, one consequence of this approach is to reinforce a predominantly white, Imperial version of British history. Similar issues arise when digital games make use of popular history. Often their perspective on history is a markedly narrow one, shaped by the key markets of many digital games being in the global 'North'.
However, many very popular digital games approach history in a more abstract way.
Rather than using history as a 'period' setting in the manner of Ubisoft's blockbuster Assassin's Creed (Ubisoft Montreal, 2007), these games grapple with history as a process (Elliot & Kapell, 2013). Among the most prominent are the games of the critically acclaimed Sid Meier's Civilization series (MicroProse/Firaxis, 1991-), which has sold over 33 million copies (Nunneley, 2016) and is widely respected in the games industry.
The accolades the series has accumulated include Sid Meier's Civilization IV (Firaxis, 2005) being named the second-best PC game of all time by IGN (Adams et al., 2007).
But presenting history in an abstract manner creates other problems. While simulations are necessarily abstract (Frasca, 2004), what is significant are the elements that are retained or removed. In the case of digital games, there is also a demand that the simulation be 'entertaining'. Thus, various elements of history may be emphasized as activities, particularly activities which will lead to success in the game. For example, the Sid Meier's Civilization series privileges particular approaches to technological development, the use of natural resources and relationships between cultures.
Such expectations often reflect the cultural and historical assumptions of the game designers (Ford, 2016; Mir & Owens, 2013; Mukherjee, 2017). It is entirely reasonable to observe that digital games like the Sid Meier's Civilization series are based on Eurocentric and colonial assumptions. However, it does not necessarily follow that through playing these games the players internalize these assumptions, nor that the process of abstracting real events into an interactive simulation means that digital games are unable to represent history in a meaningful way. This article examines historical strategy games from the perspective of the player's experience in relation to the wider gaming community. Players negotiate their individual experience through these communities, sharing their knowledge of history, counterfactual creations and strategic approaches to the games they play. These counterfactual communities illustrate that alignment with, negotiation of and resistance to dominant paradigms of history are not necessarily found in the games themselves, but are palpable in the actions of players and the communities of practice they establish. These practices suggest that the relationship between digital games, players and history is contextual. By understanding this context, a better perspective on the player's experience of the game may be developed, one which can inform the analysis of a game's algorithmic structure. This issue will be explored through a discussion of two games and their online communities: Europa Universalis II (Paradox Interactive, 2001) and Victoria: Empire Under the Sun (Paradox Interactive, 2002). The discussion will focus on how the two games represent a specific moment: the colonization of Australia. This analysis is relevant to other historical strategy and grand strategy games. The article begins by reviewing prior scholarship on digital games that represent history as a process.
Then, after briefly outlining some core elements of Paradox Interactive and their signature games, the representation of the Australian colonial moment in each game is discussed.The final section examines the practices of the games' communities, arguing that these practices demonstrate that some players use these games reflexively as counterfactual tools for thinking creatively about official versions of history.
Past Approaches
A rich vein of scholarly work that focuses on the representation of history in digital games is now established (Chapman, 2016; Elliot & Kapell, 2013; Whalen & Taylor, 2008). This includes work that examines the use of historical settings in digital games, both games that assiduously maintain correct historical detail, such as the Medal of Honor series (Dreamworks Interactive, 1999-), and games that explore historical fantasy, such as Bioshock (2K Australia, 2007). Other scholarship focuses on digital games that 'play with' history, in the sense that they involve the player in an account of the past in which the player's actions impact what develops through the passing of time. The most prominent series of games in this style is the Sid Meier's Civilization series, which is also the subject of key scholarly work that explores and critiques how digital games are able to represent history (Galloway, 2006; Wark, 2007; Chapman, 2013). The series is notably abstract in contrast to similar games that include historical actors, events and geographies as core designed elements of the player experience. Galloway, in Gaming: Essays on Algorithmic Culture (2006), constructively divides discussion of the Sid Meier's Civilization series into three nodes:

1. The 'cultural critique': games are too trivial to be discussed seriously;
2. The 'ideological critique': games represent history in a way that demonstrates a particular ideological bias (2006: 96);
3. The 'informatic critique': games are algorithms, a form of information that provides an allegory for the contemporary control society (2006: 102-3).

The following discussion focuses on examining the ideological and informatic critique of games that portray historical processes.
Ideological critique
A key problem of the Sid Meier's Civilization series, and many other games that deal with history as process, is the oversimplified manner in which they represent colonialism. The abstract way in which they deal with the relationship between the colonizer and the colonized is potentially controversial, considering the systemic acts of violence underpinning the expansion of European power in the Americas, Africa, the Middle East, South and East Asia, and Asia Pacific. Scholars have argued that the Sid Meier's Civilization series only represents Western-style development (Caldwell, 2004: 50), ascribes little importance to indigenous cultures (Douglas, 2002: 27) and - in the case of games representing history in general - overemphasizes the role of the military (Crogan, 2003: 279). However, these criticisms can be drawn together under the rubric of Friedman's key criticism of Sid Meier's Civilization II: that the game simultaneously denies and de-personalizes the violence in the history of 'exploration, colonization, and development' (1999: 145).
Underpinning this argument is the perception that through co-producing an ideologically loaded 'text' the player tacitly accepts the paradigm portrayed by the game. A vein of scholarship that focuses on the representational capacities of the software has suggested that this encourages acceptance of the game's ideology (Douglas, 2002: 24; see also Friedman, 1999: 136). The concern is that even in highly reflexive communities, the ideological implications of the game may remain obscured (Douglas, 2002: 28). This scholarship crucially pinpoints the significance of this genre of digital games; through a close relationship between the players' actions and software-defined parameters, a 'popular history' is produced.
Other research on digital games suggests that learning and mastering the rules can lead to reflection on the rules (Wark, 2007; Salen, 2008; Krapp, 2011).
Consequently, players may recognize and subvert the ideologies that the rules may reflect (Apperley, 2010; Gee, 2003: 176; Everett, 2005: 318-319; Uricchio, 2005). The practice of playing a digital game notionally involves accepting software-enforced rules and structures for action, but this does not necessarily involve accepting the ideologies embedded in the software. Scholarly work specifically on Sid Meier's Civilization II notes the game's potential for encouraging a 'skeptical, critical attitude' in the player (Stephenson, 1999: 4) that may expose the player to the 'arbitrariness of ideologies of nation and culture' (1999: 1).
In order to demonstrate how players and gaming communities unpick this arbitrariness, I will turn to the notion of counterfactual history to argue that historical strategy games are used to explore counterfactual histories (Apperley, 2013; Chapman, 2016; Shaw, 2015). They act as tools for cultivating what I have described elsewhere as the 'counterfactual imaginary' (Apperley, 2013: 190), a creative process where players use historical games to negotiate the terrain of mass media popular history according to their own predilections (see also Hammar, 2016). The process and product of play in this case is a personal expression that remixes everyday popular history, which creates scope for an expression of identity that challenges the hegemony of official history. This is a key part of the enjoyment of such games. For example, the remixing of history was used prominently in the promotion of Rise of Nations (Big Huge Games, 2003); one 2003 print advertisement proclaimed: 'Where were you during the Roman missile crisis?' Part of the entertainment value for players is that these games have the potential to diverge from recorded events (Atkins, 2003: 89), a value that Big Huge Games clearly references. By granting players the creative license to ask 'what if?' (Atkins, 2003: 94), players are invited to make the game a site for imagining counterfactual scenarios (Atkins, 2003: 102-103).
Here, games provide what Serious Game design pioneer Gonzalo Frasca describes as a space for players to explore different possibilities in their own 'personal and social realities', one that is open to 'multiple', 'alternative' perspectives (2004: 97).
Informatics critique
The core of Galloway's argument is not that digital games such as Sid Meier's Civilization III do not have ideological leanings, but rather that any ideologies present are of marginal relevance. He states:

To use history as another example: the more one begins to think that Civilization is about a certain ideological interpretation of history (neoconservative, reactionary, or what have you), or even that it creates a computer-generated "history effect", the more one realizes that it is about the absence of history altogether, or rather the transcoding of history into specific mathematical models. (2006: 102-103)

For Galloway, Sid Meier's Civilization III embodies the principles of informatics; history has been turned into manageable and quantifiable variables that, he argues, allegorically represent what Gilles Deleuze calls the 'control society' (1992). Similarly, in Gamer Theory (2007), McKenzie Wark suggests that strategy games should be read as allegories:

It [Civilization III] embraces all differences by rendering all of space and time as being of the same quality - by reducing space and time to quantity. And finally, the next level appears: the expansion of topology outwards, beyond America, to make America equivalent to all of time and space. (2007: section 073)

I suggest that Wark's and Galloway's concerns with the notion of allegory have considerable overlap. Wark's argument is congruent with Galloway's in that his concern is with the quantifying - and homogenization - of all differentials. The allegorical representation of the control society empties the ideology of the historical representation, as every factor of potential difference becomes simply a variable in or input into an algorithm. Chapman (2013) mounts an important critique of this position, arguing that the process of historical representation, even by expert historians, is always a process of reduction and simplification. This article's concern is less historiographical than that raised by Chapman.
While the games themselves may have reduced ideological positions to selections from drop-down menus, the work of 'making meaning' of these games does not only take place within algorithmic constraints; rather, it is also situated in relation both to a community of players and the circumstances of the individual. Previous literature that indicates digital games have been used as historical tools for exploring counterfactual imaginations suggests fertilization and crossover between the informatic (or algorithmic) and the ideological elements of historical strategy games, which this article will explore through the player practices and player communities of the Paradox Interactive games Europa Universalis II and Victoria: Empire Under the Sun.
Paradox Interactive
The company Paradox Interactive is responsible for designing and publishing Europa Universalis II (2001), Victoria: Empire Under the Sun (2002) and many other digital games. Based in Stockholm, Sweden, the company is probably the best-known developer of 'grand strategy' games. It is known for having a loyal customer base, and has become very profitable; for example, in 2015, Paradox Interactive made over 74 million US$ in profits (Brunozzi, 2016). Its stock market launch in 2016 valued the company at 420 million US$, and its shares traded on NASDAQ for 3.96 US$.
The company has achieved this considerable success with a relatively small stable of games. Notable games include the Europa Universalis (1999-) and Hearts of Iron (2002-) series. Both of the initial releases garnered substantial critical acclaim and positive audience reception, despite being considered both extremely complicated and graphically unimpressive (Osborne, 2002; Parker, 2001). Paradox Interactive was able to capitalize on this success to produce sequels of greater quality, having incorporated many suggestions from players into the redeveloped games.
Loyal players eagerly anticipated the updated versions of the games and promoted their release among the wider games community. Paradox Interactive also diversified quickly, first moving into publishing games (Calvert, 2004), and then into digital distribution. Paradox Interactive's (unfortunately named) digital distribution portal, Gamer's Gate, began operation in April 2006. It was successfully spun off into a separate business in 2008, and now offers over six thousand digital titles, DRM free.

Europa Universalis II spans the era 1420 to 1820. The main focus of the game is the expansion of the European powers to dominate trade and create colonies around the globe. While technology, budgeting, diplomacy and military concerns are all important in the game, they are ancillary to the concern of colonialism; the player selects a country for which they will play. Initially, this is one of the European colonial powers: England, France, Portugal, the Russian Empire, Spain, and so on. However, a key innovation of the game is that it will actually allow the player to select any country from around the world, including non-Western and tributary nations. But within the programmed parameters of the game, it is almost impossible for the other playable nations to win.
However, the game community has an important role in assisting players to set goals. Feasible goals for the smaller, less powerful nations are discussed, contested and developed through a mostly generative discussion within the community of players. One example of a community-negotiated 'win' is the player of Oman being considered to have 'won' if they still retained control of Zanzibar in 1820. The community also places caveats on the actions of stronger nations. For example, because of its relative security within Europe and its large number of explorers, Portugal can 'spam' settlers across North and South America as well as Africa, creating a Portuguese new world with relative ease. In order to avoid exploiting these unbalancing advantages, players may choose to only colonize areas that were historically colonized by Portugal, such as the east coast of the South American continent. In any case, winning the game outright by becoming the strongest power is extremely challenging even for the most dedicated players, encouraging community goal setting as a strategy for negotiating the relative success of the myriad potential sub-optimal outcomes.
Victoria: Empire Under the Sun covers the period from 1820 to 1920. While it has a similar aesthetic to Europa Universalis II, it deals with issues including commerce, economy, diplomacy, technology and politics in a considerably more detailed fashion. While colonization remains an aspect of the game, the urge to colonize is driven by industry, meaning that the nations of Europe scramble to gain control over areas producing essential raw materials (coal, lumber, steel, sulfur, etc.). Like Europa Universalis II, any country can be chosen for play; however, Victoria: Empire Under the Sun has a more formal mechanism for assigning value to success through victory points that are calculated at the end of the game. This means that several non-European nations (China, Japan and Persia) have a good chance to perform well in the points system. Countries are ranked according to their total victory points obtained during the game, and the player-communities have established ways of evaluating performance based on the ranking that a country achieves in the game compared to how hard that country is to play. For example, to finish the game with Brazil in a top eight position is considered a victory (Anderson, 2004).
Maps, Colonies, Genocides
Both games present their interfaces as a map, or rather, as several maps which detail various aspects of management (of resources, transportation networks, religion, and so on). The perspective is often as if the player were surveying the map from a tabletop, revealing the genre's roots in board and war games (Apperley, 2006: 13). The player is at a distance from the field upon which they will be acting, rather than located within or adjacent to the screen through an avatar or a gun-sight. Thus the player is located as an 'outsider' to the game-world; one who retains the ability to act upon that world, but from a distance, hence this genre is sometimes referred to as 'God Games'. The map is presented at a level of detail that represents strategic military concerns (Kontour, 2012), which reflects the player's role in the game: that of a military/economic machine that typifies military despotism.
At the start of play of Europa Universalis II, Australia, and the waters around it, are obscured. They are presented as empty spaces waiting to be filled in through exploration. During the course of play only the continent's eastern and western coastal provinces can be explored; the center, south and north of Australia cannot be mapped. However, in Victoria: Empire Under the Sun, Tasmania, Victoria and New South Wales are already a part of the British Empire and there is a significant British presence in the rest of Australia. As a result, it is likely that during play the whole of Australia will become a British colony, and eventually a self-governing dominion.
The strategic importance of Australia in these games revolves entirely around the continent's apparent suitability for colonization. In Europa Universalis II, there is likely to be a race between the seafaring powers to be the first to 'discover' Australia, send colonists and eventually form official colonies. While much of the world of Europa Universalis II is available for European expansion, most of it is unsuitable for colonization by Europeans, leaving locations like Australia critically important for developing colonial empires, while the other regions are best left alone to be dominated by trade. In Victoria: Empire Under the Sun, Australia is of little importance to Britain, as it produces no goods that are not already available from the home isles.
Furthermore, the colony does not attract many settlers because the algorithm has a bias towards sending unoccupied populations to the USA; this means that even as a self-governing dominion, Australia will remain a relative backwater without the human resources to contribute to the empire's armies and economy.
One of the more controversial aspects of Europa Universalis II is the ease with which 'natives' may be either exterminated or assimilated. Each province that is uncontrolled by a 'civilized' nation has a native population of between five hundred and five thousand; furthermore, the population of each 'uncivilized' province is given a rating between zero and 10 to indicate their aggressiveness towards incursions by colonists and traders. The native population is assimilated into the colony once it has become a certain size, and the natives automatically become productive citizens in the economic output of the colony's economy. A peaceable native population can be easily assimilated to create a large, thriving colony without having to allocate troops to protect the colonists. The large and peaceful native population is what makes Australia a desirable colony. However, trying to set up a colony or even a trading post in a province that has a large and aggressive native population will often lead to the extermination of the colonists. This can be prevented by stationing the colony with troops, as even the weakest colonizing troops can usually defeat a large indigenous army. However, once troops enter a province with a native population, the player is presented with the option to 'exterminate natives'; this option is recommended in most strategy guides when dealing with natives of aggression level four or more.
In Europa Universalis II's simulation of Aboriginal culture, Aboriginal populations are rated with one or zero aggressiveness depending on the province; this usually ensures their survival because they won't threaten the fledgling colony, and will add a great deal to the community once they are assimilated. In Victoria: Empire Under the Sun, the already assimilated Aboriginal population is shown as a demographic within the wider population of the state, which can be appeased if the government adopts policies which suit their status. A vital part of the game involves setting the agenda for government policies on key issues, such as economics, religion, trade and the military, or, in this case, minority rights. Minorities can be assigned full or limited citizenship, or reduced to slavery. Thus, in the case of Victoria: Empire Under the Sun, the player is able to take a more (or less) enlightened approach to minority rights in Australia than those which were actually adopted during the historic period represented in the game, yet which still fall short of returning lands to indigenous sovereignty.
During play of both games, Australian settlement can take on some rather strange configurations. Competition for Australian colonies among colonizing nations in Europa Universalis II can create counterfactual maps which include parts of Australia being controlled by France, Spain or Portugal. This will often lead to minor colonial wars, as powers become embroiled in struggles in Europe and extend the field of combat to the Australian colonies. Australia takes on a much more varied form in Victoria: Empire Under the Sun; while it is mostly controlled by Britain, it is also an important source of coal, making it of great strategic value to other powers that are trying to industrialize. Britain may then trade parts of Australia with other colonizing powers to gain advantage in another sphere. For example, in one playthrough of the game, Australia had become a Brazilian colony after Britain had traded it for some of Brazil's Caribbean possessions - Cuba, Puerto Rico - won in a war with Spain. This ahistorical but strategic exchange was implemented by the game's AI; deliberate intervention by players further multiplies the possibilities of producing such counterfactual outcomes.
Paradox Player Productions
Paradox Interactive has extensive official forums for all its games, which include player-authored strategy guides and player-created wikis for both of the games discussed. The counterfactual imaginary is cultivated in several ways on these official forums. Crucial for this discussion is the genre of the After-Action Report - often referred to as AARs on the forums - and the discussions of 'modding' the games. In general, a key community function of the forums is to cultivate expert play through an ongoing discussion of the variables in the various games, and how these variables may be understood in the wider context of the games and thus deployed strategically (see Myers, 2003: 44). While this knowledge is useful primarily in playing the games, it is developed through gleaning information from secondary sources or 'paratexts', which can include Internet sites, chat rooms, bulletin boards, conversations with other players and game magazines (Consalvo, 2007; Newman, 2008).
In addition, the forums allow the many local contexts of play to be discussed and negotiated, opening forum participants up to multiple perspectives on history.
One element of the considerable creativity on display in the forums is negotiation conducted by the player to establish a stronger resonance between the 'global', one-sided, colonial/industrial version of history presented in the games and a more nuanced 'local' perspective on history (Apperley, 2010: 135-8). This can emerge as modding projects which re-inscribe the physical and human geography of particular provinces, or in substantive creative fictional work like that found in the 'Rise of the Condor' AAR, which uses a combination of writing and screenshots to recount a counterfactual colonial encounter between the Incan and Spanish Empires (Apperley, 2013: 193).
AARs are the recounting of a single game, often in a series of episodes that the author updates as the game is played. Like many player accounts of digital games, the report may be primarily descriptive, or recounted in the form of fiction (Consalvo, 2003). One exemplary forum post reports on a game that remixes the events of the First Afghan War (1839-1842), so that the Persian player manages to avoid the British intervention (Dalrymple, 2013). The player of the Persian faction, Wannabe Tartar, writes:

The Afghans, under the leadership of Dost himself, put most of their effort of driving the Persians out from Farah, but they couldn't break enemy lines. The Shah marched with his army to Kandahar, where he would lay siege to the city. Slowly the Afghans were being pushed back, and realizing that they were not able to withstand the Persians, Dost tried to convince the Shah in signing peace. But the Shah, now smelling victory, continued his advance towards Ghazni and Mazar I Sharif. Although Ghazni didn't fall to the Persians, Mazar I Sharif did, bringing the Persians within less as [sic] 100 kilometres from Kabul. The Shah moved his troops closer to Kabul and when his troops were at the outskirts of the city, Dost was quickly [sic] to offer peace. All province[s] currently occupied, with the exception of Mazar I Sharif, would be seceded to the Persian. The Shah realized that if the war would drag on, the probability of a British intervention force being sent to Afghanistan would increase, so he accepted the treaty offered to him. (Wannabe Tartar, 2006)

The After Action Reports act as a dual display of game and writing prowess; in some cases the posts are also illustrated with maps and portraits of the historical figures being discussed. The reports also show other players the tactics adopted to succeed (in Wannabe Tartar's post the goal was to create a Persian Empire). AARs come in many shapes and sizes, as the (sub-)genre of writing is characterized by highly individual approaches. What characterizes AARs as a discernable sub-genre is that they share a 'faithfulness to the gameplay' (Mukherjee, 2016a: 67). The processes of writing and playing are entwined, making the AAR a key example of the imbrication of playing, reading and writing that can be found in digital gaming cultures, which captures 'the ephemeral computer game narrative that is played out in each instance of gameplay' (Mukherjee, 2015: 117). The way that Europa Universalis II and Victoria: Empire Under the Sun both encourage players to set their own goals allows AARs to examine multiple perspectives on history, connecting them to the counterfactual imaginary; however, many AARs also show a remarkable adherence to what are perceived as the 'historical facts' by trying to recreate events with an adherence to 'accuracy'.
Notions of historical realism and authenticity are also major topics of discussion in the forums. Both forums include lengthy discussions that evaluate and critique how authentically Europa Universalis II and Victoria: Empire Under the Sun represent events in the past. Of particular interest is when the discussion is directed by a forum member who has a perspective based on local knowledge or expertise. For example, one unofficial forum, the Croatia-based Vojska.net, has advocated major changes to the map in Europa Universalis II in order to better represent their historical understanding of the geopolitical division of provinces in the Balkan region of Europe from the 15th to the 19th century. In order to have the boundaries of Balkan provinces drawn in a historically authentic manner, the community produced and distributed a mod of the Europa Universalis II map. This collaboration on Vojska.net allowed players from outside the region to play in a geographic depiction of the Balkans that was locally defined as historically realistic. While not counterfactual, such activities make players aware of the multiplicity of historical perspectives which Europa Universalis II and Victoria: Empire Under the Sun subsume in a singular view of history.
In the space of the forum the boundaries between playing the game, discussing it and making mods are blurred. The community supports this blurring of activities, as evidenced by numerous threads linking to guides to mods. Both Europa Universalis II and Victoria: Empire Under the Sun also have large collective projects focused on developing systemic improvements to the games. The Victoria Improvement Project is committed to making the major wars of the era more realistic; thus it has developed mods which improve the realism of the game scenarios dealing with the American Civil War, the Franco-Prussian War and World War I. The project also expands the development of technology, and has worked on improving the artificial intelligence of the game (which is the subject of much criticism in the forums). The Alternate Grand Campaign and Event Exchange Project for Europa Universalis II were originally two separate projects that have merged. The focus of the project is to develop more events based on history. In the game, events are typically connected to a certain country, and only countries that were originally intended as playable in the colonial sense have many events (England, France, Portugal, Spain, Sweden). So the project has two purposes: to represent historic events more realistically by modifying events already in the game; and to generate new events that occurred historically in individual nations, but were not originally included in the game.
Individuals and organizations that have a stake in the representation of Australia's history could of course create mods for the game, to either represent the past more accurately, or to create a more fantastic scenario. The tools are available for anyone to do that; there is also ample opportunity to learn the processes of modding through involvement in larger projects. Yet mods made for Australia are not common. For example, one of the mods from the Victorian Improvement Project creates a more realistic immigration flow from the metropolitan centers to the periphery colonies. However, this is a more recent mod, which indicates both the potential and limitations of the practice. The Indigenous People of Oceania (IPO) mod for Europa Universalis IV was created with the aim of depicting the history and presence of the native people of Oceania (Cosmosis7, 2016). The IPO mod creates a diverse representation of historical Aboriginal, Micronesian, Melanesian and Polynesian cultural groups using the framework of the game to present differing lifestyles (nomadic or seafaring), technologies and beliefs. The mod also includes pop-up events with information about the history of each nation. This mod gives a more balanced and inclusive depiction of the indigenous view of history in the region, capturing some of the specific attributes of the different indigenous nations of the Pacific. While this mod does add more historical detail that provides access to a different perspective on historical events, it clearly establishes the limitations within which it functions. The mod must operate within the parameters provided by the original software. The indigenous viewpoint is not established on its own terms, but through adding detail to the colonial framework in which the game operates.
Conclusion
The way that the dynamics of the colonial period in Australia are encoded in the two games discussed here embeds the logic of the contemporary colonists, the most problematic aspect of which is the representation of Aboriginal culture as homogenous 'natives' to be assimilated or killed as the player sees fit. However, this does not mean that players accept the ideologies presented in games like Europa Universalis II or Victoria: Empire Under the Sun as valid. The shared paratexts of the counterfactual community demonstrate that players establish negotiated positions in relation to the 'official' history presented, which draw on their own experiences of local and popular culture. Indeed, their negotiations are made tangible through remixing the official history presented by the game, as other perspectives are developed through the creator's own experiences and counterfactual imagination, which may be informed by highly localized knowledge and concerns. The paratexts are based on skills and literacies that are relatively traditional, such as After Action Reports, and the more advanced digital literacies required to mod the games.
Crucially, these unofficially published remixes are evidence of a wider reflexivity in relation to Europa Universalis II and Victoria: Empire Under the Sun. While only a small proportion of the counterfactual community is engaged in producing the content, many other players will consume it. Thus, these digital textual practices inform the experiences of players beyond these communities. However, these practices - which may otherwise challenge or disrupt dominant paradigms of history - are limited because they are anchored in software which portrays the process of colonization and exploitation of indigenous people from a dominant white colonial perspective (Mukherjee, 2016b; 2017; Shaw, 2015). Thus, these practices also contribute to the circulation and dominance of official versions of history, rather than offering a radical new perspective. Yet they also illustrate that official accounts of the past, however dominant, remain subject to localized interpretation. This impacts on the everyday understanding of history in a significant way, not just because it highlights the disjunction between official history and lived experience, but also because it makes transparent the power behind the official version of history.
"year": 2018,
"sha1": "cde783019c0460538d39903658383530a2b63c30",
"oa_license": "CCBY",
"oa_url": "https://olh.openlibhums.org/articles/10.16995/olh.286/galley/147/download/",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f4ff58de9b6e1070a5a49c9c113032efe2333f77",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
Relationship between the Viral Load in Patients with Different COVID-19 Severities and SARS-CoV-2 Variants
SARS-CoV-2 has spread throughout the world since 2019, accumulating changes in its genome that have led to the appearance of new variants. These changes gave the virus different evolutionary advantages, such as greater infectivity and/or a greater ability to evade the immune response, which could lead to an increased severity of COVID-19 cases. There is no consistent information about the viral load that occurs in infection with the different SARS-CoV-2 variants; hence, in this study we quantify the viral load of more than 16,800 samples taken from the Mexican population with a confirmed diagnosis of COVID-19 and analyze its relation to different demographic and disease variables. We detected that the viral load caused by different variants differs only in the first two days after the onset of symptoms, being higher when infections are caused by the Delta variant and lower when caused by Omicron. Furthermore, the viral load appears to be higher in outpatients compared to hospitalized patients or in cases of death. On the other hand, no differences were found in the viral load produced in vaccinated and unvaccinated patients, nor did it differ between genders.
Introduction
In late 2019, SARS-CoV-2 emerged in Wuhan, China. This virus spread worldwide and by the end of 2022 had already caused more than 6 million deaths, in addition to global economic havoc [1].
Since its emergence, SARS-CoV-2 has been mutating, giving rise to variants which may have characteristics that confer evolutionary advantages over the original Wuhan virus [2], prompting a current interest in identifying variants that may be of concern. For this reason, variants have been classified by the Centers for Disease Control and Prevention (CDC) based on their attributes as a variant under monitoring (VUM), variant of interest (VOI), variant of concern (VOC), and variant of high consequence (VOHC). In addition, the World Health Organization (WHO) convened a group of experts to assign an easy and practical nomenclature to these variants, and they agreed on letters of the Greek alphabet [3]. In Mexico, multiple SARS-CoV-2 variants have been documented throughout this pandemic, including Alpha, Beta, Mu, Lambda, Gamma, Delta and Omicron [4].
Around the world, there are already several studies showing that some of these variants, compared to the original Wuhan variant, have a greater capacity to spread and to evade the immune response, for example, by reducing the ability of antibodies produced by the host to neutralize the virus and prevent its entry into the cell [5]. Some studies mention that infection with the Delta variant, for example, results in a higher viral load compared to infection with other variants [6-9]. This could indicate that some variants may have a greater replicative capacity, thus generating more copies of the virus during infection. Viral load, together with the number of antibodies produced by the host, appears to affect infectiousness, although there is still controversy about this assumption in the literature [6,10,11].
In addition, some studies affirm that the severity of COVID-19 is related to a higher viral load of SARS-CoV-2, associating this variable with a higher probability of host death [12-14], although there are also contradictory results on the subject [15,16].
Nevertheless, other studies also mention the influence of the time elapsed from the onset of symptoms to the collection of the sample on the viral load detected [17,18]. For this reason, studies that take this variable into consideration are needed, so that better conclusions on the subject can be reached.
Therefore, viral load appears to be important for issues related to the transmission and prognosis of COVID-19 disease progression and could vary due to different factors. Due to the contradictions currently found in the literature, a retrospective analysis was performed in this study using data from more than 16,800 patients, with the intention of generating new information for a better understanding of the prognostic value of viral load.
Study Design
A retrospective cross-sectional study was conducted to determine the viral load caused by infection with different variants of SARS-CoV-2 and its relation to clinical outcomes, demographic data and vaccination status of Mexican patients.
For this purpose, the SINAVE (National Epidemiological Surveillance System) and SISCEP (Epidemiological Control System) databases were used. Data were initially selected from 25,002 nasopharyngeal exudate samples from patients with a confirmed COVID-19 diagnosis, which had also been sequenced by the Mexican Genomic Surveillance Consortium (CoViGen-Mex). We only included the samples that were processed through the same PCR method (Logix Smart COVID-19®), with an interval between the onset of symptoms and collection between 1 and 11 days, with all clinical, demographic and laboratory data (Ct value of the SARS-CoV-2 RdRP gene), and with variant identification. From the total remaining (16,984), the viral load of each sample was calculated using the ∆∆CT method, with generation of a standard curve of the SARS-CoV-2 RdRP gene. Thereafter, 104 samples with outlier ∆CT values were identified using the interquartile range (IQR): samples whose ∆CT value lay more than 3 times the IQR below quartile 1 or above quartile 3. Because these samples contained a CT value that was numerically distant from the rest of the data, which could lead to misleading results, we decided to exclude them, leaving a total of 16,880 samples included in the analysis carried out in this study. Figure 1 summarizes how the samples and the associated data were selected (Figure 1).
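To make the exclusion rule concrete, the sketch below shows one way the 3 × IQR filter described above could be applied. This is only an illustration in Python (the analysis in this study was actually performed in RStudio and Excel, as noted in the Statistical Analysis section), and the DataFrame and the "delta_ct" column name are hypothetical.

```python
import pandas as pd

def filter_iqr_outliers(df: pd.DataFrame, col: str = "delta_ct", k: float = 3.0) -> pd.DataFrame:
    """Keep rows whose `col` value lies within [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = df[col].quantile([0.25, 0.75])
    iqr = q3 - q1
    mask = df[col].between(q1 - k * iqr, q3 + k * iqr)
    return df[mask]

# Hypothetical usage: 'samples' holds one row per confirmed case with its dCT value.
# samples = pd.read_csv("samples.csv")
# kept = filter_iqr_outliers(samples)  # here, 16,880 of 16,984 samples would remain
```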
The information in these databases comes from all over the Mexican territory and was received at the four Epidemiological Surveillance Laboratories of the Mexican Institute of Social Security (IMSS) between 1 March 2021 and 4 September 2022. The methodology used for diagnosis (carried out by the Central Epidemiology Laboratory, IMSS), viral load determination (by generating a standard curve made specifically for this project), and sequencing (performed by CoViGen-Mex) was the same for all the samples and is described next [19].
Identification of Positive Cases
For all the samples analyzed in this study, RNA was obtained from a 140 µL pharyngeal exudate sample taken with a Dacron swab (Copan Diagnostics, Corona, CA, USA, Catalog: 159C) and stored in a viral transport medium (BD™ Universal Viral Transport System, East Rutherford, NJ, USA, Catalog: 220220), with the QIAamp 96 Viral RNA kit (QIAGEN, Hilden, Germany) used according to the manufacturer's instructions; the Logix Smart COVID-19® amplification kit (COVID-K-001; Salt Lake City, UT, USA) on the 7500 Fast Real-Time PCR System (Applied Biosystems®, Foster City, CA, USA) was used for the diagnosis of COVID-19. This kit detects the RdRP gene (which is specific to SARS-CoV-2) and the RNaseP (RP) gene (an endogenous gene of human epithelial cells, used here as an internal control). The genes were evaluated by adding 5 µL of the RNA to a multiplex master reaction of the aforementioned kit, with the following thermocycling conditions: one cycle at 45 °C for 15 min and 95 °C for 2 min, and 50 cycles at 95 °C for 3 s and 55 °C for 32 s. A positive result implied that both genes were detected with Ct values below 37. Samples with Ct values greater than 30 and less than 37 were excluded from this protocol, as CoViGen-Mex only sequences samples with Ct values below 30.
Sequencing of the SARS-CoV-2 Genome (CoViGen-Mex)

Sequencing was carried out using RNA leftovers from samples processed by the LAVEs (Epidemiological Surveillance Laboratories) that tested positive with the aforementioned methodology for COVID-19 diagnosis confirmation. All assays were performed by the Mexican Consortium for Genomic Surveillance (CoViGen-Mex) using the Illumina NextSeq 500 or MiniSeq platforms, with flow cells in which 500 or 200 samples were sequenced in parallel, respectively.

The data were then analyzed through a computational pipeline designed by CoViGen-Mex. The viral sequences, once their quality was confirmed, were released to an international platform, the Global Initiative on Sharing All Influenza Data for SARS-CoV-2 and influenza sequences (GISAID, www.gisaid.org (accessed on 26 January 2024)), and to the national database, MexCoV2 (http://mexcov2.ibt.unam.mx:8080/COVID-TRACKER/ (accessed on 26 January 2024)). During the analysis of the obtained sequences, mutations were identified. These mutations indicated the lineage of the different SARS-CoV-2 variants that circulated in Mexico, and the results were deposited in the SINAVE and SISCEP platforms that we used in this work.
Validation of the Endogenous Control
With the intention of comparing the viral load generated by the different variants of SARS-CoV-2, a relative quantification analysis was designed using the human RP gene as an endogenous gene. First, it was validated that the expression levels of the reference gene (RP) were similar to those of the target gene, RNA-dependent RNA polymerase (RdRP).
A concentrated solution of 10¹⁰ theoretical copies was prepared for the target gene and the endogenous gene, taking as reference a base-pair weight of 650 Da and a size of 200 bp for RdRP and 180 bp for RP. Subsequently, the number of molecules per microgram was calculated using Avogadro's number and the number of moles [20].

From the concentrated solutions, in each case, decimal dilutions were made to generate a 10-point standard curve (10¹⁰-10⁰ copies/µL). The slope of the curve generated by plotting the ∆CT obtained from the CT values for the different dilutions was between −0.1 and 0.1, so the RP gene could be used as an endogenous control in this case [21].
The 2^−∆∆CT method, better known as ∆∆CT, is a relative quantification strategy for the results of a qPCR or RT-qPCR, which uses the generated threshold cycle (CT), assuming an amplification efficiency of 100% in the analyzed samples. The two "deltas" present in the name of this method refer to the fact that the expression level of a target sample is compared to a control or reference sample, also using a reference gene as a normalizer. The results of this method are usually reported as fold changes from one sample to another; however, in this work, we only subtract the CT of the endogenous gene (RP) from the CT of amplification of the RdRP gene of SARS-CoV-2. Subsequently, using the resulting CT of each sample (∆CT), we analyzed the means for comparisons between groups. The ∆CT values shown are inversely proportional to the SARS-CoV-2 viral load.
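The per-sample normalization just described can be written compactly. The sketch below is illustrative only (the array names are hypothetical), and it follows the convention stated above that a lower ∆CT corresponds to a higher viral load.

```python
import numpy as np

def delta_ct(ct_rdrp: np.ndarray, ct_rp: np.ndarray) -> np.ndarray:
    """Per-sample dCT = CT(RdRP) - CT(RP); lower values mean higher viral load."""
    return np.asarray(ct_rdrp) - np.asarray(ct_rp)

def fold_change(dct_sample: float, dct_reference: float) -> float:
    """Relative load of one sample vs. a reference under the 2^-ddCT assumption
    (i.e., 100% amplification efficiency)."""
    return 2.0 ** -(dct_sample - dct_reference)
```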
Statistical Analysis
Descriptive statistics were used to report averages and percentages; these were calculated with their respective 95% confidence intervals. Outliers were identified using the IQR (interquartile range). The chi-square test was used to compare categorical variables. To cross-reference the factors with the numerical variables, the one-way parametric ANOVA test or the nonparametric Kruskal-Wallis H test was used, as appropriate. p values < 0.05 were considered significant. RStudio (version 2023.06.1+524, Boston, MA, USA) and Microsoft Excel (version 16.66.1, Redmond, WA, USA) were used for the analysis and generation of graphs.
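The tests named above were run in RStudio; purely for illustration, a roughly equivalent sketch using Python's scipy is given below. The column names ("delta_ct", "variant", "patient_type") are hypothetical placeholders, not the actual field names of the SINAVE/SISCEP databases.

```python
from scipy import stats
import pandas as pd

def compare_viral_load(df: pd.DataFrame, value: str = "delta_ct", factor: str = "variant"):
    """Nonparametric comparison of a numeric variable across factor levels
    (Kruskal-Wallis H test); returns the H statistic and p value."""
    groups = [g[value].dropna().to_numpy() for _, g in df.groupby(factor)]
    return stats.kruskal(*groups)

def association(df: pd.DataFrame, a: str = "variant", b: str = "patient_type"):
    """Chi-square test of independence between two categorical variables."""
    table = pd.crosstab(df[a], df[b])
    chi2, p, dof, _ = stats.chi2_contingency(table)
    return chi2, p, dof
```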
Results
Of the 16,984 samples that went through the viral load quantification process, only 16,880 were analyzed, since 104 were identified as outliers (analysis carried out by the interquartile range (IQR) method).
The analysis of the time between the onset of symptoms and collection of the sample, which we defined as the timing of sample collection, showed that the viral load decreased significantly over the days (p < 0.05, Figure 2); therefore, the total samples analyzed were divided according to this parameter, even though most of the analyzed data came from patients who sought medical attention between 1 and 4 days after the symptoms appeared (Groups A and B), as shown in Table 1.
The Delta and Omicron variants were detected most frequently during the analysis period, with prevalences of 44.2 and 49.0%, respectively. The Alpha and Gamma variants circulated 21.4 and 12.9 times less than Omicron, and the least frequent were Beta, Lambda and Mu, which together were detected in only 115 samples (0.7% of the total). Therefore, in this work, and for ease of analysis, we combined them and refer to them as "Others" (Table 1).
In Table 1, we also include other data, such as those referring to sex, age, vaccination status and type of patient (outpatient, hospitalized or deceased) at the time of collection of the sample.
When analyzing the data, we observed that only at the beginning of the infection (Group A) did the viral load produced at the time of collecting the samples differ between the variants (Figure 3). We found that individuals infected with Gamma had a significantly higher viral load compared to those infected with Omicron and Alpha (p < 0.05, Figure 3A), as did those infected with Delta in relation to those infected with Omicron (p < 0.05, Figure 3A). More than 3 days after the onset of symptoms, no difference in the viral load produced during infection was detected between the different variants, as shown in Figure 3B-D.

On the other hand, the SARS-CoV-2 viral load seemed to be related to the age of the participants; however, as in the analysis of the variants, we detected differences only at the beginning of the infection, in this case, up to 4 days after the onset of symptoms (Groups A and B). As shown in Figure 4, older adults had a lower viral load compared to the other age groups.

Another analysis carried out was that of the viral load with respect to the sex of the patient. However, no statistically significant differences were found in any of the groups, as shown in Figure S1.

Another variable considered in this work was the severity of COVID-19, due to its possible relation with the viral load of SARS-CoV-2. Contrary to expectations and regardless of the time of collecting the sample, outpatients always showed a higher viral load than hospitalized patients and/or those who had died (Figure 5). This analysis was performed in a general manner, but it should be noticed that the vast majority of hospitalized and deceased patients were older adults, as shown in Table 2. The period of time included in this study covered the beginning of the vaccination campaigns in Mexico, which, at that time, contemplated only adults, prioritizing the immunization of older adults; therefore, vaccination status could be affecting the results. However, even when this variable was taken into consideration, no significant differences were found (Figure S2).

Still on this topic, we decided to also analyze the different variants regarding patient severity. The results showed that, unlike viral load, which is only related to variants at the onset of infection, severity status appears to depend on variants at all stages of the disease (Figures 6 and 7).

According to the correspondence analysis with dimension reduction (Figure 7), we can see that the Delta variant is the most related to death cases. When dichotomizing Delta with respect to the other variants, the differences were significant in all groups (Group A: 10.2% vs. 2.4%, p < 0.05; Group B: 14.7% vs. 6.3%, p < 0.05; Group C: 23.8% vs. 15.2%, p < 0.05; Group D: 28.9% vs. 23.1%, p < 0.05; and independent of OPT: 18.4% vs. 6.0%, p < 0.05). On the other hand, this same analysis showed that the Omicron variant is the most related to outpatient cases and, when dichotomizing this variant with respect to the others, the differences were significant except in Group D (Group A: 83.7% vs. 72.4%, p < 0.05; Group B: 74.6% vs. 62.7%, p < 0.05; Group C: 52.2% vs. 45.1%, p < 0.05; Group D: 22.3% vs. 27.0%, p > 0.05; and independent of OPT: 76.3% vs. 53.9%, p < 0.05).

In addition, analyses were performed to further elucidate the relation between viral load, severity level, and the different variants. The results showed that the viral load of those infected with the Delta variant was lower in severe cases compared to outpatient cases throughout the evolution of the disease (Figure 8). On the other hand, this pattern could not be detected in patients infected with other variants, except in Groups B and D for those infected with the Gamma variant, and in Group A for those infected with Omicron.
Discussion
This is the first study to evaluate the SARS-CoV-2 viral load distribution across such extensive epidemiological data in a large number of patient samples in Mexico. A total of 16,880 samples were analyzed to investigate the relation among vaccination status, patient outcomes, age, sex, and SARS-CoV-2 variant with respect to the viral load produced during infection, which was determined by using the ∆CT method, normalizing with the CT value of the housekeeping RP gene and thus eliminating the variability due to sampling, as mentioned in other articles [10].
According to the obtained results, the viral load was significantly higher at the beginning of the infection and decreased over time.Other studies also showed similar results, with the viral load decreasing as the disease progressed [16,[22][23][24], even though the sample collection and diagnostic method were different.For this reason, all analyses in this study were performed independently for the groups formed with respect to the time between the onset of symptoms and collection of the sample (A, B, C, and D).
One of the main concerns that we tried to resolve in this work was the possibility that the mutations that gave rise to the new SARS-CoV-2 variants could affect the speed of virus replication in patients, causing a higher viral load during infection and consequently a worse outcome of the disease.Regarding this issue, our results showed differences between the viral loads of the different variants only at the beginning of the infection (Group A), when the Delta and Gamma variants seemed to produce a higher number of copies than the Alpha and Omicron variants.Similarly, in a study carried out by Puhach and collaborators [23], a higher viral load was found in Delta than in Omicron at the beginning of the infection.Other studies also suggested that the viral load in patients infected with the Delta variant was higher than in those infected with the Omicron variant [6] or found a higher viral load with Delta variant infection compared to the Alpha variant [25].However, some studies did not detect any difference, such as that of Yuasa and collaborators [26], where they analyzed 694 nasopharyngeal exudates through RT-qPCR and sequencing and reported the number of copies produced by the Delta variant vs the Omicron variant.Other studies found that the Delta variant presented a viral load ten times higher than the historical variants and a significantly higher difference with the Beta variant, but no statistical difference was found between the Delta and the Alpha [7].
We also compared the different COVID 19 variants with the patient's severity; we found statistical differences between Delta and Omicron with the other variants, in which infection with Delta was associated with dead patients, and Omicron was associated with outpatients.Other studies found similar results; one of them showed that during the outbreak of the SARS-CoV-2 Delta variant in Vietnam, the case fatality rate was higher [27].Another study which included 87 pediatric cases, infected with Alpha (5.7%), Delta (60.9%), and Omicron (33.3%) variants, showed that severe disease, distress, and myalgia were more frequent in Delta-infected patients [28].Zachary and collaborators studied 102315 confirmed COVID-19 cases, of which 20, 770 were infected with the Delta variant, 52, 605 with the Omicron B.1.1.529variant, and 28, 940 with the Omicron BA.2 subvariants.Mortality rates were 0.7% for Delta (B.1.617.2),0.4% for Omicron (B.1.1.529),and 0.3% for Omicron (BA.2).Finally, they concluded that the Omicron BA.2 subvariants were significantly less severe than that of the Omicron and Delta variants [29].Also, in a study that compared inflammatory markers among patients hospitalized during Omicron infection with those of Alpha and Delta, showed that levels of CRP in Delta and Alpha were significantly higher compared to Omicron; the same trend was observed for ferritin, alanine aminotransferase, aspartate aminotransferase, lactate dehydrogenase, and albumin.So, in accordance with our results, the mortality in Delta and Alpha was higher than Omicron [30].
On the other hand, our study showed that there were differences in the viral load detected in the different types of patients (outpatients, hospitalized patients, and deceased patients) in all groups; that is, these differences occurred independently of the timing of sample collection. Interestingly, outpatients always had a higher viral load than hospitalized patients and those who died from the disease. In the literature, some studies propose that the viral load in hospitalized patients may be lower because the severity of the disease is due to factors such as coinfections associated with SARS-CoV-2. Garay and collaborators [31] estimated the association of bacterial pneumonia with mortality in patients with COVID-19 and found that 89 of 252 patients tested positive for bacteria that cause pneumonia, which increased the percentage of deaths and could explain why hospitalized patients may have a low viral load but greater severity due to secondary infection. However, unlike our results, Tsukagoshi and collaborators [30] found that the viral load in deceased patients was significantly higher, although we must consider that their study sample was composed of only 286 individuals. Similarly, Liu et al. concluded that patients with severe COVID-19 tend to have a high viral load and a long virus-shedding period [31]. Contradictions in the results found in the literature may be due to differences in study design and in the timing of sample collection. Another example of the diversity of published results and conclusions is that some studies have reported that viral load is not related to patient outcome [15,18], while others report that it is independently correlated with the risk of in-hospital mortality [32]. Killingley and collaborators [16] conducted a controlled study, inoculating 36 people who had neither been in contact with the virus nor been previously vaccinated, and concluded that there is no relationship between viral load and patient outcome. Although these authors were able to control for many of the confounding factors that could affect the results of their study, their n was very small, so we emphasize the need to conduct larger-scale studies to shed more light on this issue.
Regarding the viral load that develops in different age groups, Puhach and collaborators [23] did not find significant differences in their review; however, they only compared children and adults. Many studies also found no differences between groups of different ages [32-35], although it must be taken into consideration that they were smaller studies, that the sampling techniques were not homogeneous, and that they all included a small number of pediatric patients. Aranha and collaborators [25] mentioned that the virus elimination time does not depend on the patient's age. The results obtained in the aforementioned studies are not consistent with our results (Groups A and B). In line with our results, a cross-sectional study in Ghana including 9549 positive samples showed the lowest median viral loads in those aged 10 years and the highest in those aged 71-80 years [36]; we also detected differences between adolescents and adults with respect to older adults, who showed a lower viral load. On the contrary, another article showed increasing SARS-CoV-2 viral load with increasing age, especially low viral loads in children under 12 years of age [37], but they used different sampling methods and did not use the ∆CT method to measure viral load.
One factor that was thought to affect viral load was vaccination. However, in general, we did not find differences in viral load between vaccinated and unvaccinated people, which agrees with the findings of works such as those of Levine and collaborators [38] and Singanayagam and collaborators [39]. The latter demonstrated that although vaccination does not change the viral load, it does reduce infections by the Delta variant and accelerates the elimination of the virus. On the other hand, in another study carried out on the Delta variant, there was a drastic decrease in the viral load of vaccinated people [23].
Although we analyzed a large amount of data in this study, the data were unevenly distributed: variants such as Delta and Omicron were well represented, whereas others such as Alpha, Beta, Lambda, or Mu did not have enough data. The same applies to the age groups; the groups from 20 to 59 or over 60 had much more data than the groups from 0 to 9 or from 10 to 19, so it would have been desirable to have more data from the latter groups. Also, a large number of variables can influence the viral load; although we took many of them into account in this work, we did not include the symptoms or comorbidities of each patient.
Conclusions
In this study we detected great differences in the viral load depending on the disease evolution (regardless of the variant) and, although the patient's outcome may vary depending on the SARS-CoV-2 variant contracted, the viral load produced by the variants only differs at the beginning of the disease and appears to be unrelated to age, sex, or vaccination status. Contrary to what was expected, the viral load is much lower in hospitalized patients, including those who died, than in outpatients.
Figure 1 .
Figure 1.Database curation diagram.The information in these databases comes from all over the Mexican territory and was received at the four Epidemiological Surveillance Laboratories of the Mexican Institute of Social Security (IMSS) between 1 March 2021, and 4 September 2022.The methodology used for diagnosis (carried out by the Central Epidemiology Laboratory, IMSS), viral load determination (by generating a standard curve made specifically for this project), and sequencing (made by CoViGen-Mex) was the same for all the samples and is described next [19].
Figure 2 .
Figure 2. ∆CT mean with respect to the timing of sample collection. A, B, C, D = Group A, B, C and D, respectively. **** p < 0.00005.
Figure 3 .
Figure 3. Analysis of the ∆CT detected in patients infected with the different variants.(A-D) = Group A, B, C and D, respectively.* p < 0.05, *** p < 0.0005.
Figure 6 .
Figure 6. Relation of each variant with respect to COVID-19 severity. (A-D) results obtained for group A, B, C and D, respectively. (E) results obtained for all the data analyzed in the study.
Figure 7 .
Figure 7. Correspondence analysis with dimension reduction of the relationship between infection with different variants and the severity developed by the host. (A-D) results obtained for group A, B, C and D, respectively. (E) results obtained for all the data analyzed in the study.
Figure 8 .
Figure 8. Relationship between ∆CT and disease severity in patients infected with the different variants.
Table 1 .
Demographic, clinical and type of patient data.
Table 2 .
Severity status of the participants of each age group. | 2024-02-27T18:03:10.022Z | 2024-02-20T00:00:00.000 | {
"year": 2024,
"sha1": "6ebeef627759e71b19de76e5c85732a2a38951ff",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "72feedc5d87a629bfa50f8414d66edd97d9c1c75",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248973082 | pes2o/s2orc | v3-fos-license | Validating Precise Orbit Determination from Satellite-Borne GPS Data of Haiyang-2D
Haiyang-2D (HY-2D) is the fourth satellite in the marine dynamic environment satellite series established by China. It was successfully launched on 19 May 2021, marking the era of the 3-satellite network in the marine dynamic environment satellite series of China. The satellite's precise orbit determination (POD) and its validation are of great significance for ocean warning and marine altimetry missions. HY-2D is equipped with a laser reflector array (LRA), a satellite-borne Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS) receiver, and a satellite-borne dual-frequency GPS receiver named HY2 that was independently developed in China. In this paper, the quality of the GPS data collected by the HY2 receiver is analyzed based on indicators such as the multipath effect, cycle slips, and data completeness. The results suggest that the receiver can be used in POD missions involving low-Earth-orbit (LEO) satellites. The precise orbits of HY-2D are determined by the reduced-dynamic (RD) method. Apart from POD itself, validation of the orbit accuracy is another important task for LEO POD. Therefore, two external validation methods are proposed: carrier differential validation using one GPS satellite and inter-satellite differential validation using two GPS satellites. These are based on satellite-borne carrier-phase data, and the GPS satellites used for POD validation do not participate in the orbit determination. The results of SLR range validation cannot resolve the orbit accuracy into the x, y, and z directions, so, to make the validation results more intuitive, SLR three-dimensional (3D) validation is proposed based on SLR range validation; the RMSs in the x, y, and z directions are 2.66, 3.32, and 2.69 cm, respectively. The overall result of the SLR 3D validation is consistent with that of the SLR range validation, which proves that the new external validation method provided by SLR 3D is reliable. The RMSs of the carrier differential validation and inter-satellite differential validation are 0.68 and 1.06 cm, respectively. The proposed validation methods are shown to be reliable.
Introduction
Haiyang-2D (HY-2D) is the fourth marine dynamic environment satellite of China. It was successfully launched on 19 May 2021, and the data receiving plan was implemented at the National Satellite Ocean Application Service (NSOAS) on 25 May 2021; the receiving system is in good condition, the antenna tracking is normal, the quality of the data is good, and the ground receiving system is in the normal task receiving stage. HY-2D, HY-2B, and HY-2C together form the three-satellite network of the series. The external checks proposed in this paper include carrier differential validation, inter-satellite differential validation, and SLR three-dimensional (3D) validation. In Section 2, the principles and derivations of the three methods of orbit accuracy evaluation are introduced in detail. In Section 3, the quality of the HY-2D data collected by the satellite-borne GPS receiver is analyzed with the TEQC software regarding the multipath error, cycle slips, and data completeness [21]. The orbit determination strategies are analyzed, and the RD method [22] is used to determine the precise orbits of HY-2D. Carrier-phase residuals are used to evaluate the internal accuracy of the POD, and SLR range validation and DORIS validation are used to evaluate the external accuracy. The carrier differential validation, inter-satellite differential validation, and SLR 3D validation proposed in this paper are used to evaluate the accuracy of the POD, and the reliability of the three methods is verified. Finally, the conclusions are presented.
Methods
Assessment of POD accuracy is mainly divided into two categories. Internal validation methods use only data that participated in the POD to check the orbit determination results, such as carrier-phase residuals and overlapping orbit validation. External validation uses data that did not participate in the orbit determination to check the satellite POD, such as comparison with orbits determined by other techniques and SLR range validation [23]. However, in the absence of orbit determination results from other techniques, there is only one external method to validate the POD, and the SLR range validation can only show the distance difference; it cannot evaluate the POD accuracy in the x, y, and z directions.
In this paper, three methods are proposed for the external validation of POD: carrier differential validation, inter-satellite differential validation, and SLR 3D validation. The carrier differential validation uses the carrier-phase data of one GPS satellite, and the inter-satellite differential validation uses the carrier-phase data of two GPS satellites out of all viewed GPS satellites. Since the selected GPS data did not participate in the orbit determination, they can be regarded as external data for validating the orbit accuracy. The SLR 3D validation can evaluate the errors of the POD in the x, y, and z directions, making the results more intuitive. Since none of the data used for the validations participated in the orbit determination, these three methods are considered external validations.
The carrier differential validation and inter-satellite differential validation use the GPS carrier-phase data to evaluate the POD. If the carrier-phase data were used to validate the accuracy of the POD without any differencing, errors caused by the ionospheric delay, satellite clock bias, receiver clock bias, integer ambiguity, and other factors would have to be considered. Since differencing can eliminate the effects of these errors on the carrier, a differencing approach is adopted in both methods.
Carrier Differential Validation Method
The carrier differential validation compares the distance change obtained from the carrier-phase data of adjacent epochs with the distance change calculated from the coordinates of the GPS satellite and the coordinates of the LEO satellite, whose orbit is determined by the RD method. The influence of the remaining errors is then eliminated by differencing, yielding the validation residuals.
HY2 is a dual-frequency receiver, and the satellite-borne GPS observations of HY-2D include two frequency carriers ($L_1$ and $L_2$) [24-26], so the linear combination method is used to obtain the Ionosphere-Free combination [27] observation:

$L_{IF} = \dfrac{f_1^2 L_1 - f_2^2 L_2}{f_1^2 - f_2^2}$ (1)

where $f_1$ and $f_2$ are the frequencies of $L_1$ and $L_2$, respectively; $L_1$ and $L_2$ are the carrier-phase observations of the two frequencies; and $L_{IF}$ is the Ionosphere-Free combination observation. The carrier-phase observations corresponding to the two frequencies at epoch $t_i$ are $\varphi_1(t_i)$ and $\varphi_2(t_i)$, respectively, and $c$ is the speed of light. The linear combination (LC) of the phase data is [27]:

$LC_i = \dfrac{c\,(f_1 \varphi_1(t_i) - f_2 \varphi_2(t_i))}{f_1^2 - f_2^2}$ (2)

where $LC_i$ is the LC observation at epoch $t_i$. The difference between the LC observations of two adjacent epochs gives the carrier distance difference $\Delta LC_i$:

$\Delta LC_i = LC_{i+1} - LC_i$ (3)

where $LC_i$ and $LC_{i+1}$ are the LC observations at epochs $t_i$ and $t_{i+1}$. The geometrical distance $\rho_i$ between the HY-2D orbit and the GPS satellite orbit can be calculated as:

$\rho_i = \sqrt{(x_h - x_g)^2 + (y_h - y_g)^2 + (z_h - z_g)^2}$ (4)

where $x_h, y_h, z_h$ represent the position of HY-2D obtained by POD and $x_g, y_g, z_g$ represent the position of the GPS satellite. The difference of the geometrical distances between two adjacent epochs is:

$\Delta \rho_i = \rho_{i+1} - \rho_i$ (5)

The carrier-phase distance difference and the geometrical distance difference of two adjacent epochs are then differenced, that is, Equation (3) minus Equation (5):

$\Delta_i = \Delta LC_i - \Delta \rho_i$ (6)

Equation (6) is used to calculate $\Delta_i$ and $\Delta_{i+1}$ of two adjacent epochs, and the difference is taken once more:

$e_i = \Delta_{i+1} - \Delta_i$ (7)

The residuals $e_i$ are thus obtained, and the effects of errors such as satellite clock bias and ambiguity are eliminated.
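As an illustration, the following minimal Python sketch implements Equations (1)-(7); it is not from the paper, and all names are hypothetical. It assumes the carrier phases are given in cycles and that the HY-2D and GPS positions have already been interpolated to the common observation epochs.

```python
import numpy as np

# GPS L1/L2 carrier frequencies (Hz) and the speed of light (m/s).
F1, F2 = 1575.42e6, 1227.60e6
C = 299_792_458.0

def ionosphere_free_lc(phi1, phi2):
    """Ionosphere-free linear combination of carrier phases (cycles) in meters, Eq. (2)."""
    return C * (F1 * np.asarray(phi1) - F2 * np.asarray(phi2)) / (F1**2 - F2**2)

def carrier_differential_residuals(phi1, phi2, r_leo, r_gps):
    """phi1, phi2: per-epoch carrier phases (cycles); r_leo, r_gps: (N, 3) positions in m."""
    lc = ionosphere_free_lc(phi1, phi2)                                   # Eq. (2)
    d_lc = np.diff(lc)                                                    # Eq. (3)
    rho = np.linalg.norm(np.asarray(r_leo) - np.asarray(r_gps), axis=1)   # Eq. (4)
    d_rho = np.diff(rho)                                                  # Eq. (5)
    delta = d_lc - d_rho                                                  # Eq. (6)
    return np.diff(delta)                                                 # Eq. (7): residuals e_i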
The core idea of carrier differential validation is that the carrier distance variation and the geometrical distance variation between the LEO satellite and a GPS satellite that does not participate in the LEO POD should, in theory, be the same over adjacent epochs. The influence of other errors is eliminated by differencing, and the residuals are obtained as the result of the orbit accuracy validation.
Inter-Satellite Differential Validation Method
The inter-satellite differential validation uses the carrier-phase data of two GPS satellites that do not participate in the LEO POD, together with the precise GPS ephemeris, to calculate the difference between the carrier-phase distances of the two satellites in adjacent epochs. The determined orbit of the LEO satellite and the precise GPS ephemeris are then used to calculate the geometrical distances between HY-2D and the GPS satellites to obtain the difference between the carrier-phase distance and the geometrical distance. Finally, this result is differenced again to eliminate the influence of other errors and obtain the validation residuals.
Equation (2) is used to obtain the Ionosphere-Free combination observations of the carrier-phase data, so the carrier distance after eliminating the influence of the ionosphere is $LC_i$. The difference between the carrier-phase distance of GPS satellite $a$ and that of GPS satellite $b$ at the same epoch is:

$\Delta LC_i = LC_{ai} - LC_{bi}$ (8)

where $LC_{ai}$ represents the carrier-phase distance between GPS $a$ and HY-2D, $LC_{bi}$ represents the carrier-phase distance between GPS $b$ and HY-2D, and $\Delta LC_i$ is the difference of the carrier-phase distances of GPS $a$ and GPS $b$. The geometrical distances $\rho_{ai}$ and $\rho_{bi}$ are calculated using the coordinates of the GPS satellites and the coordinates of HY-2D, and their difference gives $\Delta \rho_i$:

$\Delta \rho_i = \rho_{ai} - \rho_{bi}$ (9)

where $\rho_{ai}$ represents the geometrical distance between GPS $a$ and HY-2D and $\rho_{bi}$ the geometrical distance between GPS $b$ and HY-2D. The carrier-phase distance difference and the geometrical distance difference between the two GPS satellites and HY-2D are then differenced to obtain the residual $e_i$:

$e_i = \Delta LC_i - \Delta \rho_i$ (10)

Since the two GPS satellites that participate in the validation move in different directions and at different speeds, the $e_i$ values of adjacent epochs are differenced to obtain the final residual $\Delta e_i$:

$\Delta e_i = e_{i+1} - e_i$ (11)

The core idea of inter-satellite differential validation is to use the carrier-phase data of two GPS satellites to calculate the relative variations between the two satellites and HY-2D in adjacent epochs and compare them with the relative geometrical distance variations calculated from the coordinates. Therefore, when selecting data for the experiments, it is necessary to ensure that the two GPS satellites can be observed simultaneously and continuously during the period.
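A corresponding hypothetical sketch of Equations (8)-(11), reusing the ionosphere-free carrier distances from the previous sketch; all names are illustrative:

```python
import numpy as np

def inter_satellite_residuals(lc_a, lc_b, r_leo, r_a, r_b):
    """lc_a, lc_b: ionosphere-free carrier distances (m) to GPS satellites a and b;
    r_leo, r_a, r_b: (N, 3) positions in meters at the common epochs."""
    d_lc = np.asarray(lc_a) - np.asarray(lc_b)                            # Eq. (8)
    rho_a = np.linalg.norm(np.asarray(r_leo) - np.asarray(r_a), axis=1)
    rho_b = np.linalg.norm(np.asarray(r_leo) - np.asarray(r_b), axis=1)
    d_rho = rho_a - rho_b                                                 # Eq. (9)
    e = d_lc - d_rho                                                      # Eq. (10)
    return np.diff(e)                                                     # Eq. (11): final residuals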
SLR 3D Validation Method
Since the SLR range validation can only provide the residuals in distance, and the results are not very intuitive, we propose an SLR 3D validation method based on the SLR range validation. SLR 3D validation uses data about the SLR stations and the results of the SLR range validation to validate the accuracy of orbits. It uses the coordinates of the SLR stations provided by the International Laser Ranging Service (ILRS, https://ilrs.gsfc.nasa.gov/, accessed on 15 March 2022) and the RD orbits of the LEO satellite to calculate the geometrical distances from the stations to the LEO satellite. The differences between the geometrical distances and the distances obtained by laser ranging are then formed, the root mean square (RMS) of the results is propagated to the three directions according to the law of error propagation, and intuitive validation results are obtained.
The latest coordinates of the SLR stations (SLRF2014) were downloaded from the ILRS [28], and the coordinates of the stations in September 2021 were calculated from the initial coordinates and the coordinate change rates provided by SLRF2014. The distance residuals were obtained according to the principle of laser ranging:

$\Delta = SLR - \rho$ (12)

where $SLR$ is the laser ranging distance and $\rho$ is the geometrical distance calculated between the SLR station and HY-2D. According to the law of error propagation, the error is propagated to the x, y, and z directions:

$\delta_\Delta^2 = \delta_{SLR}^2 + \left(\dfrac{\Delta x}{\rho}\right)^2 (\delta_x^2 + \delta_{x1}^2) + \left(\dfrac{\Delta y}{\rho}\right)^2 (\delta_y^2 + \delta_{y1}^2) + \left(\dfrac{\Delta z}{\rho}\right)^2 (\delta_z^2 + \delta_{z1}^2)$ (13)

where $\delta_\Delta$ is the RMS of the SLR range validation; $\delta_{SLR}$ is the RMS error of the range data of the SLR station; $\Delta x$, $\Delta y$, and $\Delta z$ are the coordinate differences between the SLR station and the HY-2D satellite in the x, y, and z coordinates during the continuous observation period; $\delta_{x1}$, $\delta_{y1}$, and $\delta_{z1}$ represent the accuracy of the coordinates of the SLR stations in the three directions; and $\delta_x$, $\delta_y$, and $\delta_z$ are the orbit validation errors in the three directions. The SLR 3D validation method is an improved method based on the SLR range validation method; it can express the accuracy of an orbit in the x, y, and z directions, making the results more intuitive.
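Under the error-propagation model reconstructed in Equation (13) above (itself an assumption, since the original equation image is not available), the per-axis orbit errors can be estimated from several observation periods by least squares. This hypothetical sketch only illustrates the idea; all names and inputs are illustrative:

```python
import numpy as np

def slr_3d_errors(dx, dy, dz, rho, rms_delta, rms_slr, sta_xyz_acc):
    """dx, dy, dz, rho: per-period mean geometry (m); rms_delta: per-period RMS of the
    SLR range residuals (m); rms_slr: ranging accuracy (m); sta_xyz_acc: (3,) station
    coordinate accuracies (m). Returns the orbit errors (delta_x, delta_y, delta_z) in m."""
    # Squared direction cosines of each observation period, from Eq. (13).
    A = np.column_stack([(np.asarray(dx) / rho) ** 2,
                         (np.asarray(dy) / rho) ** 2,
                         (np.asarray(dz) / rho) ** 2])
    # Observed residual variance minus the ranging noise variance.
    b = np.asarray(rms_delta) ** 2 - rms_slr ** 2
    # Solve for the combined per-axis variances (orbit + station coordinates).
    var_combined, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Remove the station coordinate variance to isolate the orbit errors.
    var_orbit = var_combined - np.asarray(sta_xyz_acc) ** 2
    return np.sqrt(np.clip(var_orbit, 0.0, None))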
Quality Analysis of Satellite-Borne GPS Data
The satellite signals received by the GPS receiver consist of the direct signal superimposed with reflected signals; the latter can affect the quality of the observations. The impact on the pseudo-range can reach tens of meters, while the impact on the carrier phase is small, at only a few centimeters. Multipath errors are closely related to the antenna characteristics, the receiver environment, the incidence angle, etc. It is difficult to establish a uniform and accurate model, so the multipath effect is considered one of the most important factors affecting the quality of data [29,30].
Based on the linear combination of pseudo-range observations and carrier-phase observations, the multipath errors of the $L_1$ and $L_2$ frequencies can be calculated [31]:

$MP_1 = P_1 - \dfrac{\alpha+1}{\alpha-1} L_1 + \dfrac{2}{\alpha-1} L_2$, $MP_2 = P_2 - \dfrac{2\alpha}{\alpha-1} L_1 + \left(\dfrac{2\alpha}{\alpha-1} - 1\right) L_2$

where $M_i$ is the multipath effect of the pseudo-range on each frequency, $P_1$ and $P_2$ are the pseudo-range observations, $L_1$ and $L_2$ are the carrier-phase observations expressed in meters, and $\alpha = f_1^2 / f_2^2$.

The carrier-phase multipath as well as the elevation angle of G03 as observed from HY-2D were calculated and are shown in Figure 1. Figure 1a,c illustrate statistical assessments of MP1 and MP2 and elevation angles, while Figure 1b,d illustrate the exceptional cases of MP1 and MP2 during the period when the GPS satellite had just been locked by the satellite-borne GPS receiver at the beginning of the observations. It can be seen from (a) that MP1 converged rapidly when the satellite elevation angle was greater than 60°. When elevation angles were less than 60°, MP1 fluctuated more; the fluctuations were between −2 and 2 m. During the period in Figure 1b, the satellite elevation angles were all less than 45°, so MP1 fluctuated greatly throughout the period and no convergence occurred. The phenomenon indicated in (b) was a rare occurrence.
At the beginning, when the receiver locked GPS and started to collect observations, MP2 maintained a large negative value, and after a short period it returned to normal. Figure 1c reflects that the change of elevation angle did not have a drastic effect on MP2; when the elevation angle was greater than 40°, MP2 always fluctuated within a small range of about −0.2 to 0.2 m and within −0.5 to 0.5 m overall. During the period in Figure 1d, the satellite elevation angles were all less than 30°, so MP2 fluctuated more, and the overall fluctuation ranged between −2 and 2 m.
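The combinations above are the standard TEQC-style multipath observables, which the description in the text matches. A hypothetical sketch of their computation, assuming pseudo-ranges and carrier phases already expressed in meters and a continuous arc over which the constant ambiguity term can be removed by de-meaning:

```python
import numpy as np

F1, F2 = 1575.42e6, 1227.60e6
ALPHA = (F1 / F2) ** 2  # alpha = f1^2 / f2^2

def mp_combinations(p1, p2, l1, l2):
    """p1, p2: pseudo-ranges (m); l1, l2: carrier phases converted to meters.
    Returns the MP1 and MP2 multipath series with the arc mean subtracted."""
    p1, p2, l1, l2 = map(np.asarray, (p1, p2, l1, l2))
    mp1 = p1 - (ALPHA + 1) / (ALPHA - 1) * l1 + 2 / (ALPHA - 1) * l2
    mp2 = p2 - 2 * ALPHA / (ALPHA - 1) * l1 + (2 * ALPHA / (ALPHA - 1) - 1) * l2
    # Subtracting the arc mean removes the constant carrier-ambiguity bias.
    return mp1 - mp1.mean(), mp2 - mp2.mean()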
The number of satellites observed can reflect the quality of the data and thus the stability of the receiver. Through the quality analysis of HY-2D satellite-borne GPS data for the 7 days from DOY 262 to 268, six or more satellites could be observed 78.7% of the time, and fewer than four satellites were observed only 0.15% of the time. O/slps represents the ratio of the actual number of epochs observed over a period to the number of cycle-slip epochs; the more cycle slips occur, the smaller the O/slps will be. Data completeness represents the ratio of the actual number of epochs observed by the GPS receiver to the theoretical number of epochs in a period; in the actual process of data acquisition, various factors may lead to missing data. The data utilization rate is the ratio of the actual number of epochs observed with four or more satellites with dual-frequency observations to the theoretical number of epochs observed.
The multipath error, cycle slips, and data completeness of the 7 days of satellite-borne GPS data were calculated and are tabulated in Table 1. The errors of MP1 and MP2 were 0.35 and 0.23 m, respectively, and MP1 was about 1.5 times MP2. The HY-2D observations of DOY 265 were missing in the period from 19:49 to 23:27, resulting in a significantly lower than average data completeness and utilization rate.
Orbit Determination Strategies and Analysis of Accuracy
HY-2D satellite-borne GPS data are released by NSOAS with a sampling interval of 1 s. The precise ephemeris, precise clock offsets, DCB corrections, etc., required for POD are provided by the Centre for Orbit Determination in Europe (CODE). The accuracy validation of the orbits is carried out using SLR data provided by the ILRS and the precise ephemeris provided by the International GNSS Service (IGS). Detailed information about the data used in this article is shown in Table 2. LEO satellites are mainly affected by conservative and non-conservative forces, and in order to account for the influence of these forces on satellite motion, various dynamical models and pseudo-stochastic parameters were added to the RD orbit determination process [32,33]. XGM2019 (degree and order 120 × 120) was used to model the Earth's gravity field; TIDE2000 and FES2004 [34] to model the solid-earth and ocean tides [35]; and DE405 to model the third-body perturbations (Sun, Jupiter, Saturn, Uranus, Neptune, Venus, etc.) [36]. The strategies used in the POD of HY-2D are shown in Table 3 (reduced-dynamic orbit determination strategy for HY-2D). This study used HY-2D satellite-borne GPS data for the 7 days from 19 to 25 September 2021 (DOY 262-DOY 268) for POD [37]. Arc lengths of 24 h were selected, and the pulses were estimated every 6 min. The RD orbits were validated using internal and external validation methods. For internal validation, the carrier-phase residuals of the observations collected by the HY2 receiver were used to assess the accuracy of the HY-2D RD orbits. The external validation methods used data independent of the POD to assess the orbit accuracy. In addition to the SLR range validation, the three proposed methods, carrier differential validation, inter-satellite differential validation, and SLR 3D validation, were also used, and the results indicated that these three methods are reliable.
Carrier-Phase Residuals Analysis
The carrier-phase residuals are an important indicator for testing the orbit determination method, reflecting the degree to which the observations and the dynamical model fit the actual situation [38]. The quality of the observations, the length of the POD arc, and the parameters to be estimated are all important factors influencing the carrier-phase residuals [39]. The carrier-phase residuals of the RD orbits were calculated over the 7 days starting from DOY 262 in 2021, and summaries are shown in Table 4.
As shown in Table 4, the carrier-phase residuals fluctuated from −0.0601 to 0.0691 m, with more than 99.4% of the residuals distributed within ±0.025 m, and the fluctuation was relatively gentle. The 7 day RMS values of the carrier-phase residuals fluctuated between 7 and 8.1 mm, and the fluctuation range was only 1.1 mm, which indicated that the strategy and dynamics model we used in POD were reliable and could determine RD orbits with high accuracy. The sequence of carrier-phase residuals depending on the elevation angle is plotted in Figure 2. As shown in Figure 2, the RMS of the residuals was 7.8 mm, indicating the high accuracy of the carrier observations of HY-2D. The carrier-phase residuals were large when the elevation angles were less than 20°; the main reason for the phenomenon was that the receiver's ability to capture the signal was poor when the elevation angles were too low, resulting in poor quality of observations. Therefore, in the process of POD, the low-elevation angle data with poor quality could be deleted by setting the cut-off elevation angle. The cut-off elevation angle set in the process of orbit determination in this study was 5°.
SLR Range Validation
SLR range validation is the method of calculating the difference between the laser ranging distance and the geometrical distance computed from the satellite orbit and the coordinates of the stations [40]. The ranging accuracy reaches 1 cm, making it one of the most widely used external methods [41]. When the POD accuracy is assessed using SLR range validation, ocean tide, solid-earth tide, and polar tide models are used for the tidal corrections, and the station velocities provided by ITRF2014 are used to eliminate the effect of plate motion. Tropospheric delay correction, center-of-mass correction, general relativity correction, and station eccentricity correction are applied to the SLR observations. This article used the coordinates and velocities of the stations provided by the ILRS in SLRF2014 as a priori values. During the 7 days from DOY 262 to DOY 268 in 2021, a total of 19 stations participated in tracking HY-2D.
During the experiment, a total of 1407 normal point (NP) data from 19 stations were used to validate the orbits of HY-2D. Due to missing observations from HY-2D at DOY 266, the orbit in this period was calculated by Bernese 5.2 internal interpolation, and the POD results in this period are not accurate. Therefore, the NP data observed during the missing period needed to be deleted, and a total of 105 NP data were deleted from six stations: 7090, 7810, 7840, 7941, 8834, and 7839. The RMS value of the validation and the number of NP data for each station were calculated and are presented in Figure 3. As shown in Figure 3, station 7403 has the best observation accuracy, with an RMS value of 0.0145 m, but it also has the fewest NP data, with only 7. The least accurate result is for station 7941, with an RMS value of 0.0578 m and 99 NP data. Station 7825 had the most NP data, with 189, and an RMS value of 0.0521 m. The results of the SLR range validation from DOY 262 to DOY 268 in 2021 are summarized in Table 5. Summarizing the SLR range validation residuals of the 7 days of RD orbits of HY-2D, the RMS value was 0.0495 m. The results show that the overall accuracy of the RD orbit of HY-2D is better than 0.05 m.
DORIS Validation
The DORIS orbit check is a method that uses DORIS phase data to validate the orbit accuracy through the rate of change in distance. For the validation, the elevation cut-off angle was set to 10°, and DORIS phase data at 10 s intervals were used. Since the time system of the DORIS data is International Atomic Time (TAI) while the RD orbits of HY-2D use GPS time, the first step was to unify the time systems. The ionospheric delay effects were removed using a linear combination to obtain the Ionosphere-Free carrier-phase observations LC_i, and the mean distance change rate d∆ϕ(t_i) was calculated. The mean distance variation rate calculated from the orbits of HY-2D and corrected by various models is dρ. The difference between d∆ϕ(t_i) and dρ was calculated to obtain the residuals of the DORIS validation [42].
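A hypothetical sketch of this range-rate comparison, assuming ionosphere-free carrier distances at 10 s intervals and a fixed station position; the time-system unification and the model corrections described in the next paragraph are omitted, and all names are illustrative:

```python
import numpy as np

def doris_rate_residuals(lc, r_leo, r_station, dt=10.0):
    """lc: ionosphere-free carrier distances (m) at dt-second intervals;
    r_leo: (N, 3) RD orbit positions (m); r_station: (3,) station position (m)."""
    phase_rate = np.diff(np.asarray(lc)) / dt                      # rate from observations
    rho = np.linalg.norm(np.asarray(r_leo) - np.asarray(r_station), axis=1)
    geom_rate = np.diff(rho) / dt                                  # rate from the RD orbit
    return phase_rate - geom_rate                                  # residuals in m/s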
Before using the DORIS data for validation, the RD orbit was corrected for the DORIS antenna phase center offset. In order to improve the accuracy of the validation results, solid-earth tide, ocean tide, and polar tide models were used for the tidal corrections. The zenith delay was calculated using the Saastamoinen model [43], and the tropospheric delay was then corrected by mapping the signal propagation path using the Niell Mapping Function (NMF) [44]. The results of the DORIS validation for the 7 days of orbits are shown in Table 6 (seven-day summary results of the DORIS orbit check).
The table reports, for each station, the number of observations and the minimum, maximum, and RMS of the residuals (m/s). The RMS values of the residuals for each station fluctuated between 0.005 and 0.009 m/s. The large errors for the YEMB and MSPB stations were due to a large amount of data between 10° and 15° elevation angles; the elevation cut-off angle of the DORIS validation was set to 10° in order to retain more data, and the larger share of low-elevation data led to poorer validation results at these stations. Summarizing the statistical results of the 7 days, the RMS of the residuals was 0.0085 m/s.
Accuracy Assessment of RD Orbits by Using the Three New Methods
In addition to the common external methods, SLR range validation and DORIS validation, the carrier differential validation, inter-satellite differential validation, and SLR 3D validation proposed in this paper are also used for external validation.
Carrier Differential Validation
The carrier differential validation was carried out on the data of DOY 266 in 2021, and the period over which the receiver observed a GPS satellite was about 30 min. At the beginning and end of the observations, the elevation angle was less than 40° and the impact of various errors was too large, so the data of the middle 1200 epochs (20 min at a sampling interval of 1 s) were used for the validation. The data of the particular satellite used for the validation do not participate in the POD. GPS satellites G05, G15, G25, and G31 on DOY 266 in 2021 were taken as examples. The results of these satellites are shown in Figure 4. As shown in Figure 4, when using the G05 satellite for the validation, its data were removed from the observations so that they were not used in the POD, ensuring that the validation method is an external method. In order to illustrate the reliability of the carrier differential validation, the experiment was carried out 30 times, with only one set of GPS data removed each time. The results of the validation are shown in Table 7.
Inter-satellite Differential Validation
The observations of HY-2D satellite-borne GPS were first processed, and then the data of the satellites involved in the validation were selected according to the distribution of the time series of GPS satellites that were observed. The distribution of GPS satellites observed by the HY2 receiver is shown in Figure 5. The results of the G11 and G28 satellites are not listed in the Table 7 because there were no observations of either on DOY 266 in 2021. The RMS value of the residuals for each satellite was better than 0.009 m, and the total RMS value was 0.0068 m. When using each satellite for validation, the fluctuation range of residuals obtained was constant and the mean value was close to 0, indicating that the carrier differential validation method could effectively eliminate the influence of errors on the carrier, so as to obtain reliable accuracy evaluation results.
Inter-satellite Differential Validation
The observations of HY-2D satellite-borne GPS were first processed, and then the data of the satellites involved in the validation were selected according to the distribution of the time series of the observed GPS satellites. The distribution of GPS satellites observed by the HY2 receiver is shown in Figure 5. As shown in Figure 5, data from two or more GPS satellites were observed in every hour, so one data set was selected per hour for the experiment, for a total of 24 data sets. Due to the short overlap period between two GPS satellites, the data of the 600 epochs (10 min at a sampling interval of 1 s) in the middle of the overlapping period were used for the experiments.
Two data sets of GPS satellites were selected for validation in each hour, and the data of these two GPS satellites were removed from the observations used in the POD. A total of 24 sets of experiments were performed and, as far as possible, two different satellites were selected each time to ensure the reliability of the inter-satellite differential validation method. The results are shown in Table 8.
As shown in Table 8, the RMS of the inter-satellite differential validation residuals was better than 1 cm in most periods, with fluctuations in the range of −0.03 to 0.03 m. The total RMS of the residuals on DOY 266 in 2021 was 0.0106 m, and the residuals of each satellite pair fluctuated within the same, small range, so inter-satellite differential validation can be regarded as a reliable external validation method.
SLR 3D Validation
The results of SLR range validation are a single scalar measure, so the accuracy of the orbit cannot be presented intuitively. The SLR 3D validation is a method that uses the positions and velocities of the SLR stations and the results of the SLR range validation to obtain the errors in the x, y, and z directions according to the law of error propagation.
Stations with more than 50 NP data were selected for the experiment, and a total of nine stations met this requirement. Each selected station provided three consecutive observation periods, and the RMS in the x, y, and z directions was obtained by solving the resulting equations, as shown in Table 9. As shown in Table 9, the errors in the three directions were better than 0.042 m for every station, and the 3D RMS was better than 5.8 cm. Combining all stations, the RMS values were 0.0266 m in the x direction, 0.0332 m in the y direction, and 0.0269 m in the z direction, and the 3D RMS value was 0.0503 m. The 3D RMS value is close to the result of the SLR range validation, which proves that the method is reliable and can evaluate the 3D accuracy of orbits more intuitively than the SLR range validation alone.
Conclusions
In this paper, the quality of HY-2D satellite-borne GPS data was analyzed, and the receiver was able to observe six or more navigation satellites more than 78.6% of the time. Compared with the L 2 frequency data, the variation in elevation angle had a greater impact on the L 1 frequency carrier-phase observations. The elevation angles were consistently below 40 • for some of the observations, resulting in a large multipath error and severe fluctuations. The multipath effect, data integrity rate, and cycle slips proved that the HY2 receiver independently developed in China had a good performance and stable operation. Three external methods including carrier differential validation, inter-satellite differential validation, and SLR 3D validation were proposed, and the feasibility of these methods was verified based on the RD orbit of HY-2D. The orbits of HY-2D were precisely determined using the RD method; the carrier-phase residual was used as the internal validation method, and the residual RMS value was 0.0078 m. The DORIS validation and SLR range validation were used as the external validation methods with RMS values of 0.0085 m/s and 0.0495 m, respectively. The proposed SLR 3D validation based on the SLR range validation obtained errors in the x, y, and z directions, and the RMS values were 0.0266, 0.0332, and 0.0269 m, respectively. The results were comparable to the accuracy of SLR range validation but more intuitive. The proposed carrier differential validation and inter-satellite differential validation were mainly carried out using satellite carrier-phase data that was not used in POD. One or two GPS satellites were selected from the satellite-borne GPS data for external validation, and the remaining observations of GPS satellites were used for POD. The RMS values of the carrier differential validation and inter-satellite differential validation were 0.0068 and 0.0106 m, respectively. Experimental results demonstrated that the three proposed methods can be used as external validation methods and that they are reliable. | 2022-05-23T15:12:38.582Z | 2022-05-21T00:00:00.000 | {
"year": 2022,
"sha1": "e66441e02afef5bb6295d1844e4b192037f4ecd0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-4292/14/10/2477/pdf?version=1653134054",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "4a6eaa949ab08a47cfe1351202c31e99b0314921",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
204946785 | pes2o/s2orc | v3-fos-license | 406. Cloning Antibodies Against Kawasaki Disease from Acute Plasmablast Responses
Abstract Background Kawasaki Disease (KD) is a childhood vasculitis, marked by prolonged fevers and coronary artery inflammation/aneurysms in near one-quarter of those untreated. The cause remains unknown; however, epidemiologic and demographic data support a single preceding infectious agent may lead to KD. Plasmablasts (PBs) are a stage of transitional B-cells that lead to plasma cells, the long-lived antibody-producing cells of the bone marrow. After initial infection, peripherally circulating PB populations are enriched for cells with antibodies against the preceding infection. We have recently published data showing children with KD have similar PB responses to children with infections. We sought to define the antibody characteristics, including clonality, of these PBs during KD. Methods We used antibody repertoire next-generation sequencing to characterize memory and PB populations. Additionally, pairing of heavy and light chains was performed with Chromium Single Cell Gene Expression (10x Genomics, Pleasanton, CA) using the Human B cell Single Cell V(D)J Enrichment Kit. Results From subject 24, antibody sequences using VH4-34 and a 19 amino acid length complementarity determining region 3 showed a massive expansion between day 4 and 6 of fever. Chromium single-cell sequencing produced over 946 heavy and light chain paired sequences. Sequence comparison showed 40% of sequences demonstrated markers of clonal expansion, which represented 100 clonal groups. Seven other KD subjects are being processed and comparative analysis will be presented. Conclusion This clonal expansion within plasmablast populations supports that Kawasaki disease is caused by an infection. Antigen targeting of these monoclonal antibodies is currently being explored. Disclosures All authors: No reported disclosures.
Background. Capsular polysaccharide (CPS) of carbapenem-resistant K. pneumoniae ST258 (CR-Kp) is a potential vaccine target. The CPS of these isolates generally falls within two homology groups, named clade 1 and clade 2. We and others have made antibodies (Abs) that act against clade 2 CR-Kp but have failed to make therapeutic Abs against clade 1 CR-Kp. Previous studies have shown that studying patients' antibody responses can help in identifying suitable candidates for developing immunotherapies. Thus, we sought to identify potential vaccine candidates by investigating the humoral response to CPS in CR-Kp-infected patients.
Methods. 24 CR-Kp isolates and the corresponding sera were collected from inpatients at Stony Brook Hospital. CPS was isolated and purified by size-exclusion column chromatography from CR-Kp strains 34 (clade 2), 36 (clade 1), and 38 (clade-Other). Anti-CPS Abs in patient serum were detected by enzyme-linked immunosorbent assay (ELISA), and bulk Abs from positive sera were purified using an affinity column. These Abs were tested for activity against CR-Kp by serum bactericidal and agglutination assays.
Results. 50% of clade 2 CR-Kp-infected patients had humoral responses against CPS34. 77% of clade 1-infected patients' sera cross-reacted with CPS34, but none of them developed Abs against CPS36. Interestingly, 90% of clade 1- and 60% of clade 2-infected patients, respectively, showed Abs binding to CPS38. Thus, we selectively purified anti-CPS Abs from two clade-Other-infected patients and observed that they were cross-reactive with all three CPS. Further, these anti-CPS Abs agglutinated all tested CR-Kp isolates (34, 36, and 38) when compared with control human IgG (P < 0.005). Additionally, these anti-CPS Abs promoted killing of clade 2 bacteria and inhibited the growth of clade 1 bacteria in an Ab-mediated serum bactericidal assay. These data indicate that the humoral responses developed in clade-Other CR-Kp-infected patients have therapeutic potential.
Conclusion.
With the unavailability of effective antimicrobials for CR-Kp, approaches like developing a novel anti-CPS vaccine could serve as an alternative therapy. Our data suggest that developing immunotherapies targeting CPS38 could potentially provide protection across both clade 1 and clade 2 bacteria in clinical settings.
Disclosures. All authors: No reported disclosures.
Clinical studies consistently find an increase in the risk of acute coronary syndrome (ACS) in the weeks following pneumonia, although the mechanisms underlying this finding are unknown. ACS most commonly occurs as a result of thrombosis at the site of ruptured atherosclerotic plaques. We hypothesized that the systemic inflammatory response to pneumococcal pneumonia leads to acute localized inflammatory changes within established atherosclerotic plaques, favoring plaque instability and rupture, thereby resulting in ACS.
The Effect of Streptococcus pneumoniae Pneumonia on Atherosclerosis
Methods. Male ApoE−/− mice, a well-established model of atherosclerosis, were fed an atherogenic diet for 7-8 weeks before intranasal infection with Streptococcus pneumoniae or mock infection. Mice were sacrificed 2 or 8 weeks post-infection. Formalin-fixed, paraffin-embedded aortic sinus plaque sections were analyzed to assess markers of plaque vulnerability to rupture. To characterise the post-pneumonic plaque macrophage phenotype, aortic sinus plaque cryosections taken 2 weeks post pneumonia/mock infection were immunostained for MAC-3 to identify macrophage-rich areas. These plaque regions were collected using laser capture microdissection and RNA was extracted for microarray analysis.
Results. S. pneumoniae infection was associated with increased aortic sinus atherosclerotic plaque macrophage content (18.1 vs. 8.0%; P < 0.05) at 2 weeks post infection, but no significant difference in aortic sinus plaque burden, plaque smooth muscle or collagen content. There was no significant difference in any of these plaque vulnerability markers at 8 weeks post infection. Microarray analysis of laser capture micro-dissected plaque macrophages identified downregulation of the expression of three genes coding for specific E3 ubiquitin ligases following pneumonia. Pathway analysis identified a significant perturbation in the ubiquitin proteasome system pathway as a result.
Conclusion. In this murine model, pneumococcal pneumonia resulted in increased atherosclerotic plaque macrophage content, a marker of plaque instability, at 2 weeks post infection. Pneumonia may therefore lead to an increased propensity for atherosclerotic plaques to rupture soon after pneumonia, due to infiltration of macrophages into the plaque.
Single-cell Sequencing Identifies Variability in Host Response Among Different Genera of Influenza Viruses
Beth Kristine Thielen, MD, PhD1; Jaime Christensen2; Anna K. Strain, PhD2; Steven Shen, MD, PhD1; and Ryan Langlois,
Background. Seroprevalence and surveillance studies indicate that influenza C virus (ICV) infection is common among humans, and initial exposure occurs early in life. ICV often causes milder disease than influenza A and B viruses, but the mechanisms underlying differences in pathogenicity remain poorly understood.
Methods. To compare early events of infection in natural target sites, we cultured primary human tracheal/bronchial epithelial cells under air-liquid interface conditions to allow differentiation. We infected these cells with human strains of influenza A, B or C virus. Cells were infected at low MOI (0.1) to ensure populations of directly infected cells and uninfected neighboring cells. To compare the early immune response and cell tropism among these viruses, we performed single-cell RNA sequencing of mock- and influenza-infected cells. In parallel, we infected cells pretreated with interferon to mimic later rounds of infection after an early immune response is initiated.
Results. Infection of primary cells by all three viruses was confirmed by RT-qPCR of bulk cell lysates. As expected, prior exposure to interferon β resulted in reduced levels of viral transcripts. At the single-cell level, we identified expression of genes associated with specific cell types, including basal, ciliated and secretory cells. We also identified expression of interferon-stimulated genes, but these genes were not homogeneously expressed among all cell subpopulations and varied among cultures infected with different influenza viruses. We also found different patterns of gene expression in cells previously exposed to interferon, suggesting that the host environment varies over subsequent rounds of infection.
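As a rough illustration of how per-subpopulation interferon-stimulated gene (ISG) expression might be summarised from single-cell data, the sketch below groups a toy cells-by-genes table by annotated cell type; the gene panel, labels, and values are invented and do not reproduce the study's analysis.

# Illustrative sketch (not the authors' pipeline): summarise ISG
# expression per annotated cell type from a cells-by-genes table.
import pandas as pd

# Rows = single cells; columns = normalised expression + cell-type label.
cells = pd.DataFrame({
    "cell_type": ["basal", "basal", "ciliated", "secretory", "ciliated"],
    "ISG15":     [0.1, 0.3, 2.5, 0.9, 3.1],
    "MX1":       [0.0, 0.2, 1.8, 1.1, 2.2],
    "IFIT1":     [0.2, 0.1, 2.9, 0.7, 2.4],
})
isgs = ["ISG15", "MX1", "IFIT1"]

# Mean ISG score per cell, then per cell-type summary: heterogeneous
# values across cell types would mirror the non-uniform ISG expression
# reported among subpopulations.
cells["isg_score"] = cells[isgs].mean(axis=1)
print(cells.groupby("cell_type")["isg_score"].agg(["mean", "std"]))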
Conclusion. Single-cell sequencing is an important tool for studying the host response to influenza infection in complex cellular environments such as the respiratory tract, in which cells vary in their susceptibility to infection and antiviral response. Further analysis will characterize differences among directly infected vs. neighboring cells and correlate responses with pathogenicity.
Disclosures. All authors: No reported disclosures.
Using the Host Response to Reduce Unnecessary Antibiotic Use in Outpatient Acute Respiratory Infections
Background. Acute respiratory tract infections (ARI) often resolve without antibiotics. Yet, antibiotics are prescribed in 60-98% of cases despite the lack of a confirmed bacterial etiology. Antigen, culture and molecular testing identify pathogens; however, they do not differentiate colonization from invasive infection. Since antibiotics are often prescribed despite the low prevalence of confirmed bacterial infection in patients with ARI, we analyzed the impact of adding host response biomarkers to the clinical and microbiological evaluation of outpatients with ARI.
Methods. A secondary analysis was performed using data from suspected ARI cohorts derived from two clinical studies. A clinical reference algorithm, which included bacterial culture, respiratory PCR panels for viral and atypical pathogens, procalcitonin, CBC, serology, and Myxovirus resistance protein A (MxA), was used to define invasive infection based on pathogen detection plus host response and to classify infections that may benefit from antibiotics. Antibiotics were considered "warranted" if patients exhibited a bacterial-specific host response, with or without bacterial pathogen detection; a detected bacterial pathogen without a host response was deemed to be colonization and "at risk for antibiotics." The percentage requiring antibiotics was calculated by dividing the number of patients with a host response for bacteria by the total number of patients at risk for receiving antibiotics (warranted + at risk). A Chi-square test was performed to compare the groups of patients likely to be treated with antibiotics: those with bacteria detected with or without a host response versus those with bacteria detected with a host response.
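A minimal sketch of the arithmetic described above, with invented counts: the percentage of at-risk patients in whom antibiotics are warranted, and a Chi-square comparison of prescription by host-response status (the 2x2 layout is an assumption about how the comparison was framed, not the study's actual table).

# Illustrative only: counts are invented, not the study's data.
from scipy.stats import chi2_contingency

warranted = 42   # bacterial host response (+/- pathogen detection)
at_risk   = 138  # bacterial pathogen detected without a host response

pct_requiring_abx = warranted / (warranted + at_risk) * 100
print(f"antibiotics warranted in {pct_requiring_abx:.1f}% of at-risk patients")

# Hypothetical 2x2 table: rows = bacteria detected with vs. without a
# host response, columns = prescribed vs. not prescribed antibiotics.
table = [[38, 4],    # host response present
         [95, 43]]   # host response absent (likely colonisation)
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")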
Conclusion. Host response may aid in differentiating viral infection and bacterial colonization from invasive bacterial infections requiring antibiotics. | 2019-10-24T09:17:18.519Z | 2019-10-01T00:00:00.000 | {
"year": 2019,
"sha1": "71e536d71fa5268a554dae6962c7a6a16485478e",
"oa_license": "CCBYNCND",
"oa_url": "https://academic.oup.com/ofid/article-pdf/6/Supplement_2/S206/30269986/ofz360.479.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a60415184624ce3073dae3a9760c15b38602f4cf",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233261377 | pes2o/s2orc | v3-fos-license | An Integrated eDiagnosis Approach (IeDA) versus standard IMCI for assessing and managing childhood illness in Burkina Faso: a stepped-wedge cluster randomised trial
Background The Integrated eDiagnosis Approach (IeDA), centred on an electronic Clinical Decision Support System (eCDSS) developed in line with national Integrated Management of Childhood Illness (IMCI) guidelines, was implemented in primary health facilities of two regions of Burkina Faso. An evaluation was performed using a stepped-wedge cluster randomised design with the aim of determining whether the IeDA intervention increased Health Care Workers' (HCWs) adherence to the IMCI guidelines. Methods Ten randomly selected facilities per district were visited at each step by two trained nurses: one observed under-five consultations and the second conducted a repeat consultation. The primary outcomes were: overall adherence to clinical assessment tasks; overall correct classification ignoring the severity of the classifications; and overall correct prescription according to HCWs' classifications. Statistical comparisons between trial arms were performed on cluster/step-level summaries. Results On average, 54 and 79% of clinical assessment tasks were observed to be completed by HCWs in the control and intervention districts respectively (cluster-level mean difference = 29.9%; P-value = 0.002). The proportion of children for whom the validation nurses and the HCWs recorded the same classifications (ignoring the severity) was 73 and 79% in the control and intervention districts respectively (cluster-level mean difference = 10.1%; P-value = 0.004). The proportion of children who received correct prescriptions in accordance with HCWs' classifications was similar across arms, 78% in the control arm and 77% in the intervention arm (cluster-level mean difference = − 1.1%; P-value = 0.788). Conclusion The IeDA intervention substantially improved HCWs' adherence to IMCI's clinical assessment tasks, leading to some overall increase in correct classifications but to no overall improvement in correct prescriptions. The largest improvements tended to be observed for less common conditions. For more common conditions, HCWs in the control districts performed relatively well, thus limiting the scope to detect an overall impact. Trial registration ClinicalTrials.gov NCT02341469; first submitted August 27, 2014, posted January 19, 2015. Supplementary Information The online version contains supplementary material available at 10.1186/s12913-021-06317-3.
Keywords: Integrated Management of Childhood Illness, Electronic clinical decision support system, Health care workers' adherence, Burkina Faso Background Currently, more than 75 low- and middle-income countries (LMIC) are implementing the Integrated Management of Childhood Illness (IMCI) strategy on a large scale. However, poor adherence of health care workers (HCWs) to guidelines has often been reported [1,2], likely due to health system limitations, such as lack of training, coordination and supervision, or low availability of essential medicines and equipment [3][4][5][6]. In Burkina Faso, the IMCI strategy was introduced in 2003, but an evaluation conducted in 2011 reported a low coverage of training and poor performance in terms of adherence to guidelines [7].
Recent advances in Information and Communication Technologies (ICT) and the advent of electronic Clinical Decision Support System (eCDSS) could potentially transform health care services in LMICs, for instance by helping HCWs to correctly follow relatively complex charts. However, several reviews reveal the lack of evidence for a scalable and sustainable impact on health indicators [8][9][10][11][12]. In particular, the experience with using such technology to improve adherence to the IMCI guidelines is limited [13][14][15][16][17].
From 2014, Terre des hommes foundation (Tdh), in partnership with the Burkinabe Ministry of Health (MoH), implemented, in primary health facilities of two regions of Burkina Faso, the Integrated eDIagnosis Approach (IeDA), a complex intervention centred on an eCDSS developed in line with national IMCI guidelines, with the objective of improving HCWs' adherence to the IMCI guidelines. Between 2014 and 2017, an evaluation was performed using a stepped-wedge cluster randomised design by an independent team from the London School of Hygiene and Tropical Medicine (LSHTM), United Kingdom, and Centre Muraz, Burkina Faso. The aim of the evaluation was to determine whether the IeDA intervention increased adherence to the IMCI guidelines and improved clinical assessment, classification, prescription, referral and counselling during underfive child consultations in primary health facilities.
Setting
In Burkina Faso, coverage of key effective interventions for preventing child deaths has steadily increased following the adoption of successive public health policies (e.g. free antenatal care, subsidies for childbirth and emergency obstetric care, national distribution of insecticide-treated nets, Artemisinin-based Combination Therapy (ACT) for treating uncomplicated malaria at facility and community level, expanded program for vaccination). Consequently, in 2015, the under-five mortality rate had declined by 56% compared to 1990, from an estimated 202 deaths per 1000 live births in 1990 to 89 deaths per 1000 live births in 2015 [18]. The government is the main health service provider and managed 83% of facilities within the country in 2014 [19]. The country is divided into 13 regions further subdivided into 63 health districts, each with one district or regional hospital. In rural areas, primary health facilities, usually run by one or more nurses with the support of health assistants, are the most common point of care and provide a basic package of outpatient services. In 2014, there were 1824 primary health facilities, corresponding to about one facility per 10,000 inhabitants.
The evaluation took place in the Boucle du Mouhoun and Nord regions from September 2014 to November 2017. Of the 11 districts in these two regions, three districts were selected by the implementing agencies to pilot the first versions of the eCDSS in 2010 and were therefore excluded from the evaluation, which was restricted to the eight remaining districts (Fig. 1). In addition to IeDA, a performance-based financing (PBF) intervention was independently implemented in four trial districts (Nouna, Solenzo, Gourcy and Ouahigouya districts). From April 2016, free care for under-five children was also introduced by the MoH in all public facilities [20].
The IeDA intervention
The IeDA intervention comprised five components:
1. An eCDSS provided on tablets to primary health facilities for the management of under-five consultations. Based on the information recorded by HCWs from the clinical assessment of the child (e.g. body temperature), the eCDSS displays the relevant charts on the screen to guide HCWs through the IMCI national protocol, from classification (e.g. uncomplicated malaria), through prescription (e.g. first-line antimalarial), to referral and counselling. During the trial period, several versions were deployed following feedback from users and stakeholders;
2. A six-day training course provided to HCWs on IMCI guidelines and the use of the eCDSS. During the last year of the trial, learning modules with short videos were also available on the eCDSS to support continuous training;
3. A quality assurance coaching system involving team meetings two to four times a year, through which health district authorities and HCWs discussed solutions to their local issues (e.g. organisation of care);
4. A supervision system including monthly visits to primary health facilities;
5. A health information system based on data collected through the eCDSS. During the last year of the trial, descriptive dashboards on under-five consultations were developed and shared with the health district authorities and HCWs.
Evaluation design
Since some components of the intervention could only be delivered at the district level, and rolling out the intervention in a phased manner was more practical for the implementing agencies, the evaluation used a stepped-wedge cluster randomised design, with health districts ("clusters") receiving the intervention at different time points in a randomised order.
Nine steps, one every 4 months, were initially planned, with the first step used as baseline (Fig. 2a). However, funding and logistic issues resulted in delayed roll-out, and the intervention was implemented in only four out of eight districts. The baseline phase included the first two steps, and during each of the next four steps, from step 3 to step 6, a new district implemented the intervention (Fig. 2b). For the purposes of data collection, ten primary facilities with staff trained in IMCI were randomly selected in each district, with stratification on the 2013 annual under-five consultation caseload [21]. Eight rounds of data collection were conducted in total (Fig. 2b).
Full implementation in a district was considered to have been achieved when the eCDSS was provided to all primary facilities and when all HCWs had been trained in its use and IMCI guidelines. In some control districts, data were collected after implementation started but before the full implementation was completed, resulting in some "contamination" of these control districts (Fig. 2b).
Randomisation and masking
Randomisation was restricted to ensure intervention and control clusters were balanced with respect to region and the PBF intervention. Details of the randomisation procedure used to allocate districts to receive the intervention have been published elsewhere [21]. Randomization was performed by JL, independently of Tdh. The nature of the intervention precluded formal masking of fieldworkers.
The allocation of the intervention to each district was gradually communicated by the research team to the implementing agencies and the list of surveyed facilities was not communicated to reduce the likelihood that more intensive support was provided to those facilities.
Sample size
The sample size was determined using the method described by Hussey and Hughes [22], assuming a design effect of 2 due to clustering within facilities and a between-cluster coefficient of variation of 0.3. With a harmonic mean of ten children seen at each of the ten selected health facilities of the eight districts per step (and therefore 100 children per district and 800 children per step), the trial would provide 90% power to detect an increase in any of the primary outcomes from 25 to 33%. With a harmonic mean of only four children seen per facility at each step, the trial would have 98% power to detect an increase from 25 to 40% [21].
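As a rough illustration only, the sketch below checks the power of a crude two-arm comparison of proportions (25% vs. 33%) after inflating the variance by the assumed design effect of 2. It is a stand-in for, not a reproduction of, the Hussey and Hughes method, and because it ignores the efficiency of the stepped-wedge design it understates the power reported for the trial.

# Simplified normal-approximation power check; the per-arm sample size
# of 800 is used loosely for illustration, not as the trial's exact
# allocation across steps.
from math import sqrt
from statistics import NormalDist

def power_two_proportions(p0, p1, n_per_arm, deff=2.0, alpha=0.05):
    n_eff = n_per_arm / deff  # effective size after design-effect inflation
    se = sqrt(p0 * (1 - p0) / n_eff + p1 * (1 - p1) / n_eff)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z = abs(p1 - p0) / se
    return NormalDist().cdf(z - z_alpha)

print(f"power ~ {power_two_proportions(0.25, 0.33, 800):.2f}")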
Data collection
Data collection was conducted by two teams, each comprising two trained nurses. At each step, all ten selected primary facilities in each of the eight districts were visited once for data collection. Data were collected for all consultations of children aged 2 months to 5 years old occurring during the research team's visit to the facility. Each visit lasted 2 days, or less if the required minimum sample size of children observed per facility was achieved. At each step, the newest intervention district was visited last to maximise the chances that HCWs had learnt how to use the new technology. Each visit was notified, by the data collection team, to the facility the day before the visit.
Fig. 2. Stepped-wedge design: actual roll-out of the IeDA intervention. Districts shaded in dark green had full implementation of the IeDA intervention. Districts shaded in light green had partial implementation of the IeDA intervention ("contaminated" control districts).
One independent trained nurse observed the consultation and recorded, using a structured and pre-tested observation form programmed into a tablet, the HCW's clinical practices, illness classifications and prescriptions given to the child. Observations were passive, and the observer never intervened during the consultation. Validation data were collected by the second independent trained nurse, who conducted a repeat consultation with the child, using the eCDSS. These validation data were intended to provide a "gold standard" classification for each child. When there were discrepancies between the HCW and the validation nurse, the final management of the child was agreed by discussion between the two of them.
In addition, at each visit, a shortened version of the WHO Service Availability and Readiness Assessment (SARA) questionnaire [23] was completed to document the availability of essential medicines and equipment required by IMCI guidelines. The four nurses recruited for data collection had previously been trained in IMCI by the MoH. The two nurses responsible for observation of consultations had at least 5 years of experience working in a health centre. The two validation nurses had at least 10 years of experience working in a health centre and were also IMCI trainers. In addition, all underwent 2 weeks of training, provided by the main investigators, on the study methods and tools prior to the trial, and benefited from two refresher trainings, provided by Tdh, on IMCI and the eCDSS during the trial.
Outcomes
The evaluation focussed on the adherence to IMCI charts designed for new consultations of children aged 2 months to 5 years old to assess, classify and treat danger signs, cough/difficult breathing, diarrhoea, fever and nutritional status. The evaluation did not consider IMCI charts designed for children who return after an initial consultation. We excluded charts related to HIV and ear problems due to their very low prevalence during the trial period (across all steps and according to the validation nurses, only 0.9% of children were classified with HIV infection and 2.7% with ear problems). We also excluded the charts related to vitamin A supplementation and vaccination, as coverage was high in Burkina Faso. Upon the advice of the trial's scientific advisory committee, for anaemia, only adherence to the clinical assessment task was evaluated due to the difficulty of assessing anaemia reliably when laboratory testing was locally unavailable.
Primary and secondary outcomes are defined in Additional file 1. Briefly, the primary outcomes included:
1. overall adherence to clinical assessment tasks;
2. overall correct classification ignoring the severity of the classifications (upon the advice of the trial's scientific advisory committee); and
3. overall correct prescription according to HCWs' classifications.
The secondary outcomes included:
1. adherence to assessment of danger signs;
2. correct identification of at least one danger sign;
3. overall correct classification accounting for the severity of the classifications;
4. overall correct prescription according to validation nurses' classifications;
5 & 6. overall correct referral or hospitalisation according to HCWs' assessment and to validation nurses' assessment; and
7. overall correct treatment counselling.
Other reported outcomes are: sensitivity and specificity of the HCWs' classifications; over-prescription of antibiotics and antimalarials; overall availability index of essential oral medicines and equipment (Additional file 2).
Analyses
Analyses were performed using Stata version 14. Analyses included all new consultations of children aged 2 months to 5 years old and excluded children who returned for a follow-up consultation after an initial consultation. Primary analyses included "contaminated" control districts as control districts based on the intention-to-treat (ITT) principle.
Secondary analyses excluded these districts for the period when they were contaminated.
Descriptive analyses were performed using individual-level data, and point estimates and confidence intervals for all outcomes were computed accounting for the clustering of observations within districts and facilities using the svy family of commands in Stata.
Comparisons between trial arms and statistical tests to investigate evidence of an intervention effect were performed on cluster/step-level summaries as recommended by Hayes and Moulton [24] for trials with fewer than about 15 clusters per arm to account for the clustered nature of the data. A "vertical" stepped wedge analysis was performed with permutation test using the swpermute command in Stata [25]. This approach analyses each step as a parallel arm trial or, in other words, computes, for each step, one cluster summary per district and one effect estimate and then combines these step-level effect estimates into a weighted average (with the weights proportional to the harmonic mean of the number of clusters in each arm and step). This approach, recommended by Thomson et al. [26], preserves the randomisation and accounts for secular trends. "Horizontal" comparisons, i.e. comparison within a cluster over time (which are non-randomised), do not contribute to the analysis. Applied to our design, across the six steps and the eight clusters, 46 cluster/step summaries were computed (two cluster/step-level summaries were excluded from the analysis due to data lost in two districts at step 6 and 7 respectively) giving six effect estimates which were then combined into a weighted average for each of our outcome.
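The estimator described above can be sketched as follows on toy cluster/step summaries. The trial itself used the swpermute command in Stata; the numbers below are invented, and the permutation test is omitted for brevity.

# Sketch of the "vertical" analysis: within each step, compute the
# difference in cluster-summary means between intervention and control
# districts, then combine step-level estimates with weights proportional
# to the harmonic mean of the number of clusters in each arm.
import numpy as np

def harmonic_mean(a, b):
    return 2 * a * b / (a + b)

def vertical_estimate(steps):
    """steps: list of (control_summaries, intervention_summaries) tuples,
    one per step; each element is a list of cluster-level summaries."""
    effects, weights = [], []
    for ctrl, intv in steps:
        if not ctrl or not intv:   # a step needs both arms represented
            continue
        effects.append(np.mean(intv) - np.mean(ctrl))
        weights.append(harmonic_mean(len(ctrl), len(intv)))
    return np.average(effects, weights=weights)

# Toy cluster/step summaries (e.g., % of assessment tasks completed),
# mirroring the roll-out of one new intervention district per step.
steps = [
    ([52, 55, 49, 51, 50, 53, 54], [78]),          # step 3: 7 control, 1 intervention
    ([54, 51, 56, 50, 52, 55], [80, 77]),          # step 4
    ([53, 57, 52, 50, 54], [79, 81, 76]),          # step 5
    ([55, 53, 51, 56], [78, 80, 82, 77]),          # step 6
]
print(f"weighted effect estimate: {vertical_estimate(steps):.1f} points")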
The above approach was used for all primary and secondary outcomes with the exception of correct identification of at least one danger sign and overall correct referral/hospitalisation. Given the very small number of children with danger signs or severe classifications warranting referral/hospitalisation who contributed to these two outcomes, Fisher's exact test, performed on individual-level data and ignoring clustering, was used to test for an intervention effect.
Statistical tests to investigate evidence of a difference between trial arms were only performed on the primary and secondary outcomes to reduce the problem of multiple testing. No formal adjustment was made for multiple testing: because our ten endpoints are not all independent of each other, applying the Bonferroni correction, which assumes that all hypotheses being tested are independent, would be overly conservative.
Results
After excluding 189 follow-up consultations, data were recorded for 2724 new consultations of children aged 2 months to 5 years old: 686 consultations at baseline, 1343 consultations in control districts and 695 consultations in intervention districts (Fig. 3, Additional file 4).
While the IMCI paper-form was used for 70% (479/ 686) and 68% (918/1343) of the consultations at baseline and in control clusters respectively, the eCDSS was used in nearly all consultations (97%, 674/694) in intervention clusters. The occasional use of the eCDSS at baseline (1%, 8/686) or in the control districts (9%, 120/1343) reflects instances of early roll-out of the eCDSS prior to training.
Gender and age distributions were similar at baseline and by trial arm (Table 1). Based on validation nurses' assessment, the most common classification given to children was malaria (between 53 and 69% of children across baseline and trial arms) ( Table 2). Other common classifications included: diarrhoea with no dehydration (about 27%) and pneumonia (between 16 and 27%). About 45% of children had one classification only and between 33 and 48% had two or more classifications (Table 3).
Adherence to clinical assessment
Across the six IMCI charts, the average percentage of tasks completed by the HCWs was 48% at baseline, 54% in the control districts and 79% in the intervention districts, with evidence for a difference between trial arms (cluster-level mean difference = 30%; P-value = 0.002) (Table 4). For all IMCI charts, HCWs in the intervention districts completed more of the recommended tasks compared to HCWs in the control districts (Table 5). In particular, more of the recommended tasks were completed for assessing danger signs: 95% versus 34% in the intervention and control districts respectively (cluster-level mean difference = 71%; P-value = 0.002) (Table 4).
Identification of danger signs
The proportion of children correctly identified, by the HCWs, with at least one danger sign was 67% (16/24) at baseline and 56% (14/25) in the control districts. It appeared to be somewhat higher (75%, 12/16) in the intervention districts, but the small number of children with danger signs precludes firm conclusions (cluster-level mean difference = 19%; P-value = 0.322) (Table 4).
Classification
Ignoring the severity of the classifications, the proportion of children for whom the validation nurses and the HCWs recorded the same classifications was 75% (457/609) at baseline, 73% (767/1049) in the control districts and 79% (450/572) in the intervention districts with evidence for a difference between trial arms (cluster-level mean difference = 10%; P-value = 0.004) ( Table 4). Accounting for the severity of the classifications slightly lowered the proportions of correct classifications (cluster-level mean difference = 9%; P-value = 0.038) ( Table 4).
By IMCI chart, HCWs in the intervention districts correctly classified children having diarrhoea with no dehydration, dysentery and acute malnutrition (severe or moderate) more often than those in the control districts (Table 6). Although based on a small number of children, HCWs in intervention districts also appeared to correctly classify children with severe malaria or severe febrile illness more often than those in control districts.
HCWs in the intervention districts were also less likely to wrongly diagnose pneumonia as being present when it was not: 7% (38/521) versus 19% (209/1113) ( Table 7). For other conditions, false positive diagnoses were rare (< 5%) in both arms.
Prescription
Overall, the proportion of children who received all the recommended prescriptions in accordance with the HCWs' classifications was 76% (465/614) at baseline, 78% (836/1074) in the control districts and 77% (437/567) in the intervention districts, with no evidence for a difference between trial arms (cluster-level mean difference = − 1%; P-value = 0.788) (Table 4). According to the validation nurses' classifications, these proportions were 65% (398/610) at baseline, 66% (693/1049) in the control districts and 69% (392/572) in the intervention districts (cluster-level mean difference = 7%; P-value = 0.226). By IMCI chart, correct prescriptions for dysentery were much more common in the intervention districts than in the control districts, as were correct prescriptions for acute malnutrition (severe without complications or moderate) and severe malaria or severe febrile illness, although still infrequent (Tables 8 and 9).
Correct prescriptions for diarrhoea with no dehydration were also higher in the intervention districts compared to the control districts (Table 9).
Over-prescription
According to the HCWs' classifications, the proportion of children who were not in need of an antibiotic but who were actually prescribed one was 11% (77/681) at baseline, 14% (187/1341) in the control districts and 8% (56/694) in the intervention districts (Table 10). According to validation nurses' classifications, these proportions were 18% (123/668) at baseline, 23% (289/1252) in the control districts and 10% (69/676) in the intervention districts (Table 11). Over-prescription of antimalarials (Table 5) was low and similar at baseline and between trial arms: around 2 to 4%.
Treatment counselling
The proportion of caretakers to whom the HCWs mentioned both the number of doses a day and the number of days of treatment is reported in Table 4. For all oral medicines, both the number of doses per day and the number of days were mentioned by the HCWs to a high proportion of caretakers at baseline and in both trial arms (Table 12).
Availability of essential oral medicines and equipment
The average proportion of essential oral medicines that were observed to be available at the health facilities was high: 98% at baseline, 94% in the control districts and 89% in the intervention districts (Table 13). However, deworming treatments, amoxicillin, ORS and multivitamins were less frequently available in the intervention districts compared to the control districts.
With respect to essential equipment, availability at the health facilities was high: 87% at baseline, 87% in the control districts and 91% in the intervention districts. Better availability of electricity and equipment to administer ORS was observed in the intervention districts compared to the control districts.
Explanatory analyses
Comparison of HCWs' performance with and without use of IMCI paper-forms in the control districts
In order to assess whether the frequent use of the IMCI paper-based form in the control districts had an effect on HCWs' performance, primary and secondary outcomes in the control districts were compared between HCWs who were observed to use an IMCI paper-form and those who did not.
Surprisingly, HCWs who did not use an IMCI paper-form in the control districts seemed to have assessed danger signs better than those who used a form: on average they performed 45% versus 22% of the recommended tasks respectively (Additional file 5). For all other outcomes, HCWs' performance was similar between the two groups.
Agreement between HCWs and validation nurses' clinical assessment
The square root of the mean square errors (RMSE) for the differences in child's weight, height and temperature measurements between HCWs and validation nurses indicates differences of a small magnitude (< 1 kg, < 3 cm or < 1 °C) at baseline and in the trial arms (Additional file 6a). Higher RMSEs were observed between HCWs' and validation nurses' measurements of mid-upper arm circumference (MUAC) (around 5 mm) and respiratory count (around 9 counts). All differences were fairly balanced between trial arms. With respect to RDT results and caretakers' answers about children's key symptoms, actual agreement between HCWs and validation nurses was high (> 90%) at baseline and in the trial arms (Additional file 6b). The Kappa coefficients indicate that 90% or more of RDT results were in agreement beyond that expected by chance. The Kappa coefficients for caretakers' answers range from 0.60 to 0.88.
Table 9. Correct prescription according to the validation nurses' classifications: proportion of children who received at least all the recommended prescriptions.
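The agreement metrics used in this subsection, RMSE for paired continuous measurements and the Kappa coefficient for paired categorical results, can be sketched as follows on invented HCW/validation-nurse pairs.

# Sketch of the agreement metrics: RMSE for continuous measurements and
# Cohen's kappa for categorical results (e.g., RDT positive/negative).
# All data are invented for illustration.
import numpy as np

def rmse(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.sqrt(np.mean((a - b) ** 2))

def cohens_kappa(x, y):
    x, y = np.asarray(x), np.asarray(y)
    cats = np.unique(np.concatenate([x, y]))
    p_obs = np.mean(x == y)
    # Chance agreement from the marginal frequencies of each rater.
    p_exp = sum(np.mean(x == c) * np.mean(y == c) for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)

hcw_temp   = [37.2, 38.5, 36.9, 39.1]
nurse_temp = [37.0, 38.8, 37.1, 39.0]
print(f"temperature RMSE: {rmse(hcw_temp, nurse_temp):.2f} C")

hcw_rdt   = ["pos", "pos", "neg", "pos", "neg"]
nurse_rdt = ["pos", "pos", "neg", "neg", "neg"]
print(f"RDT kappa: {cohens_kappa(hcw_rdt, nurse_rdt):.2f}")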
Secondary analyses
Excluding "contaminated" control districts for the period when they were contaminated removed a total of 173 consultations from the analysis and made little or no difference to the results (Additional file 7).
Discussion
The IeDA intervention substantially improved HCWs' adherence to IMCI's clinical assessment tasks (a 30 percentage point increase on average across the intervention districts compared to the control districts), including the assessment of danger signs, which led to some overall increase in the proportion of children being correctly classified (around a 10 percentage point increase on average across the intervention districts compared to the control districts) but to no improvement in the overall proportion of children receiving correct prescriptions. The intervention, however, appeared to have reduced over-prescription of antibiotics by 6 to 13 percentage points.
Achieving correct classification depends, at least in part, on the clinical skills of the HCWs, which may be more difficult to improve than task adherence itself and may have limited the effect of the intervention on correct classification. Recent, more advanced clinical algorithms built on electronic tools, such as electronic point-of-care tests (ePOCT) integrating malaria RDT, haemoglobin and pulse oximetry in all febrile patients and other tests (e.g., glucometer, C-reactive protein) in subgroups of them, have led to major improvements in febrile disease classification and a considerable reduction in antibiotic prescription [27].
In addition, using the eIMCI in Burkina Faso, improvements in classifications and prescriptions tended to be observed for less common conditions, such as dysentery and malnutrition, for which HCWs in the control districts performed relatively poorly. The data were also consistent with an improvement in danger sign identification, correct referrals/hospitalisations and management of severe malaria, although small numbers limit these comparisons.
Table 11. Over-prescription according to the validation nurses' classifications: proportion of children who were not in need of a given medicine but who were actually prescribed it.
There were some notable differences between findings at baseline and in the control arm with respect to prevalence of pneumonia (27 and 16% respectively), malaria (69 and 55% respectively) and anaemia (13 and 7% respectively). At baseline and in the control arm, 33 and 18% of observations respectively occurred from January to March, during the peak of the pneumonia season.
Observations during the malaria season (July to November) were less frequent at baseline (49%) compared to the control arm (61%). However, the higher prevalence at baseline is consistent with the higher proportion of positive RDT: 82% of RDTs were positive at baseline compared to 66% during the control steps. These results may reflect a more intense malaria season during the baseline steps. This could also explain the difference in anaemia prevalence, which is associated with malaria.
Our findings are broadly consistent with the limited evidence available on the effectiveness of eCDSS for improving adherence to IMCI (eIMCI). In 18 primary facilities in four districts of Tanzania, only 21% of children had all ten critical IMCI tasks assessed under paper-based IMCI compared to 71% under eIMCI (p < 0.001) [14]. In two basic health centres in the Kabul province of Afghanistan, only 24% of children underwent a physical examination in line with IMCI at baseline compared to 84% after 1 year of implementation (p < 0.05) [17].
Comparison of HCWs' classifications with classifications given by an independent nurse in Tanzania showed that the electronic protocol improved overall correct classification: 83% under paper-based IMCI compared to 91% under eIMCI (p < 0.001) [14]. In Afghanistan, only 35% of children received a treatment in line with HCWs' classifications at baseline compared to 99% after 1 year of implementation [17]. Reductions in over-prescription of antibiotics have also been reported using eIMCI in Afghanistan [17] and Tanzania [15]. In Burkina Faso, interviews with HCWs indicated that IeDA was well accepted, in particular with respect to the usefulness of the eCDSS in guiding the clinical assessment (Blanchet K et al.: Realist evaluation of the Integrated electronic Diagnostic Approach (IeDA) for the management of childhood illness at primary health facilities in Burkina Faso, submitted). In Ghana, South Africa and Tanzania, HCWs reported similar opinions [13,16]. Nevertheless, our realist evaluation in Burkina Faso also revealed contextual factors that may have limited the effect of the IeDA intervention. First, staff turnover was reported to be common by district managers, in particular in remote rural facilities where most HCWs do not want to spend more than a few years. A visit in July 2017 to all intervention facilities revealed that around a third of HCWs (36%) had been changed within the last 12 months and that a relatively large proportion (36%) of HCWs had not benefited from the eIMCI training (Blanchet K et al.: Realist evaluation of the Integrated electronic Diagnostic Approach (IeDA) for the management of childhood illness at primary health facilities in Burkina Faso, submitted). Second, while supervision and audit with feedback can be effective in improving performance [28][29][30], the monthly supervision visits planned under the IeDA intervention in Burkina Faso faced challenges. The district management teams reported limited budget, access to vehicles and time to dedicate to these visits (Blanchet K et al.: Realist evaluation of the Integrated electronic Diagnostic Approach (IeDA) for the management of childhood illness at primary health facilities in Burkina Faso, submitted).
In addition to the incomplete coverage of the IeDA intervention, two further factors may have limited its effect. Pressure from children's caretakers, sometimes reported during interviews with HCWs (Blanchet K et al.: Realist evaluation of the Integrated electronic Diagnostic Approach (IeDA) for the management of childhood illness at primary health facilities in Burkina Faso, submitted), may have limited the reduction in over-prescription of antibiotics, while the relatively lower availability of some essential medicines, such as amoxicillin and ORS, in the intervention facilities compared to the control facilities may have limited the improvement in correct prescriptions for pneumonia, severe acute malnutrition without complications and diarrhoea. Multiple conditions may also have influenced the medicines prescribed: across baseline and trial arms, about a third or more of children were diagnosed with two or more classifications. In Tanzania, a large know-do gap was observed, and a lack of knowledge was not the only constraint identified for improved performance. HCWs' weak belief in the importance of following guidelines and confidence in their own experience, lack of intrinsic motivation, and physical or cognitive "overload" were also reported, with poor remuneration contributing to several of these factors [31].
Limitations
Some limitations of our evaluation should be acknowledged. First, the "gold standard" classifications were provided by a repeat consultation after the initial consultation and it is possible that the clinical status of some children (e.g. respiratory rate, temperature, current convulsions) may have changed in the interval between the two. Therefore, we should not expect full agreement between HCWs and validation nurses. Our "gold standard" is certainly less than perfect, and this would tend to reduce the apparent magnitude of any improvement in classifications.
Second, it is likely that the behaviour of HCWs was impacted by the fact that they were observed [32]. The high proportion of HCWs observed using IMCI paper-forms in the control districts (68% overall) compared to routine practice (less than 8% of under-five consultations in 2012 [33]) suggests that HCWs in this arm were motivated to perform better than usual. Even if HCWs in the control districts who used IMCI paper-forms did not seem to have performed better compared to those who did not use IMCI paper-forms, repeated observations might explain improvements in some indicators from baseline to control steps, for instance adherence to assessment of danger signs (18% at baseline compared to 34% during control steps). Nevertheless, the behaviour of HCWs in the intervention districts may also have been affected by the presence of observers. Therefore, our findings may over-estimate how well HCWs perform in the absence of an observer, but it is difficult to assert whether or in which direction this may have affected the comparison of intervention and control districts.
Third, the initial evaluation design was not followed. In particular, rolling out the intervention to all districts as planned would have led to more data in the intervention arm, which could have strengthened our findings. In addition, the evaluation design could not address the multi-faceted nature of the intervention and the evolving versions of the eCDSS. It is therefore not possible to distinguish which component of the intervention led to the observed improvements or whether the improvements were the result of the combination of components.
Lastly, with respect to statistical analyses, multiple comparisons between arms were performed and can increase the overall error in hypothesis testing, so that P-values should be interpreted with caution. The small number of clusters per trial arm precluded using random effects models on individual level data, thus limiting our ability to control for individual child-level factors.
Conclusion
To conclude, the IeDA intervention was well accepted and substantially improved HCWs' adherence to IMCI clinical assessment, which led to some improvements in overall correct classifications but little or no improvement in overall correct prescriptions. Nevertheless, substantial improvements were observed in correct classifications and prescriptions for dysentery and malnutrition. To some degree, we also observed an improvement in danger sign identification, correct referrals/hospitalisations and management of severe malaria, although small numbers prevent firm conclusions. For the most common conditions, HCWs in the control districts, who may have been influenced by a Hawthorne effect, performed relatively well, limiting the scope to detect an overall impact.
HCWs' practices are complex behaviours that have many potential contextual and intrinsic influences. Lower availability of some essential medicines in the intervention districts was observed, and our realist evaluation concurrently reported staff turnover and incomplete coverage of training and supervision, which may have limited the effect of the IeDA intervention on correct classification and prescription. Task adherence may be easier to achieve than correct classifications, which require clinical skills. In the context of national scaling up, disparities between regions exist in terms of structures, staff and resources. Nevertheless, complete coverage of the eIMCI training could be achieved by its integration into the initial nursing curriculum. Supervision will inevitably require resources but also management capacity to deal with relationships, organisation culture and HCWs' professional norms, experiences and motivation (Blanchet K et al.: Realist evaluation of the Integrated electronic Diagnostic Approach (IeDA) for the management of childhood illness at primary health facilities in Burkina Faso, submitted).
Acknowledgements
… course of the study, and Terre des hommes foundation for their collaboration.
Authors' contributions KB, JJL and SC conceived the project. SoS designed the data collection instruments with inputs from the other authors. AS and SeS implemented and supervised the fieldwork. SeS was responsible for data management. JJL and SC developed the analysis strategy, with inputs from SoS and KB. SoS analysed the data and wrote the first draft of the manuscript. All authors reviewed, made inputs to and approved the final paper. KB and SC are the overall guarantors and SoS is the corresponding author.
Funding
The trial was funded by the Bill and Melinda Gates foundation (Grant No. OPP1084359) and the Swiss Agency for Development and Cooperation. The funders of the study had no role in study design, in the collection, analysis, and interpretation of data, in the writing of the report, and in the decision to submit the paper for publication. The corresponding author had full access to all the data in the study and had final responsibility for the decision to submit for publication.
Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Declarations
Ethics approval and consent to participate Ethical approval was granted by the National Health Ethics Committee of the MoH of Burkina Faso (Reference 2014-4-026) and the LSHTM (Reference 7261). Written informed consent was obtained from the HCW and the parent/guardian of all children aged under 5 prior to the observation of the consultation and the repeat consultation. The trial was registered at ClinicalTrials.gov (NCT02341469). | 2021-04-17T13:44:17.472Z | 2021-04-16T00:00:00.000 | {
"year": 2021,
"sha1": "c8af494d43b0420facaddd95aedf63dfe1d684ae",
"oa_license": "CCBY",
"oa_url": "https://bmchealthservres.biomedcentral.com/track/pdf/10.1186/s12913-021-06317-3",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c8af494d43b0420facaddd95aedf63dfe1d684ae",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
220797078 | pes2o/s2orc | v3-fos-license | Nanocellulose Films to Improve the Performance of Distance-based Glucose Detection in Paper-based Microfluidic Devices
We report on a simple, cost-effective, instrument-free, and portable distance-based paper device coupled with nanocellulose films (NFs) for the determination of glucose. The analysis reaction is based upon the oxidative etching of silver nanoparticles (AgNPs) in the presence of H2O2, which is produced from glucose by a glucose oxidase (GOx) catalytic reaction, leading to a morphological transformation of the AgNPs. The AgNPs are coated onto the detection channel and etched by H2O2, changing from purple to colorless over a band length that correlates with the glucose concentration. To improve the performance of the enzyme immobilization, NFs, which are biocompatible and preserve the structure and biological activity of the enzymes, were placed onto the sample zone. The naked-eye detection limit was 0.1 mM for 40 min of analysis time. The recoveries of glucose spiked into the artificial urine samples and control urine samples were verified by our device and were in the acceptable range of 96-100%.
… sprayed onto the sample zone and the detection channel; the enzyme was then dropped onto the NFs at the sample zone. Then 1 w/v% CuSO4 was sprayed on and left to dry.
dPAD-NFs for the H2O2 and glucose determination
Briefly, a CorelDraw program was used to design the pattern of the device (Fig. S1). The design was printed onto Whatman No. 1 filter paper using a wax printer (Xerox ColorQube 8870, Japan). After printing, the device was heated at 120 ºC in an oven for 5 min and then left to cool to room temperature. The backside of the device was covered with tape to prevent solution leakage through the device. For the detection of H2O2, 7 µL of AgNPs was coated onto the detection zone and allowed to dry. After that, 50 µL of the standard solution was dropped onto the sample zone; it flowed along the detection channel by capillary action and reacted with the AgNPs.
For glucose detection as an off-line reaction without NFs, 55 µL of a mixture containing 45 µL of glucose and 10 µL of GOx/HRP (500 U/mL GOx and 100 U/mL HRP in a sodium acetate buffer, pH 5.1) was added to an Eppendorf tube and incubated at 25 ºC for 30 min. Then, 50 µL of the mixture was dropped onto the sample zone of the device. To improve the detection limit for glucose, NFs were placed onto the sample zone and 6 µL of GOx/HRP was coated onto the NFs. Then, 50 µL of standard solution or artificial urine sample, without any treatment, was dropped onto the NFs, as shown in Scheme 1.
Urine Samples
The artificial urine samples, created to mimic the appearance, chemical properties, and composition of human urine, were prepared from water, glycerol, urea, and sodium hydroxide. The control urine sample, with known component concentrations, was purchased from Bio-Rad Laboratories. All samples were analyzed using our devices without any pretreatment.
Effect of morphology and the size of the AgNPs
Initially, we studied the effect of the morphology and size of the AgNPs on the oxidative etching reaction, using H2O2 as a model. AgNPs of different morphologies and sizes were investigated [30,32]. The results showed that the blue AgNPs hardly reacted with H2O2 (Fig. 1c) owing to their triangular morphology (Fig. S2(a)). The yellow and orange AgNPs exhibited only slight colorless band lengths (Fig. 1d, 1e) because of their spherical and circular morphologies, respectively (Fig. S2(b, e)). In contrast, the purple AgNPs were clearly etched by H2O2, yielding the longest colorless band length (Fig. 1a). Their morphology was nearly hexagonal, with a high proportion of edges, so they were easily etched, and their size was optimal. Consequently, we selected the purple AgNPs for the following experiments.
Additionally, the optical properties of the purple AgNPs were confirmed by UV-visible spectroscopy, as shown in Fig. 2. The surface plasmon resonance (SPR) absorption spectrum of the purple AgNPs showed a maximum absorbance at 495 nm, associated with their in-plane dipole plasmon resonance, which correlates with the particle size of the AgNPs. After the addition of H2O2, the absorbance of the AgNPs gradually decreased and blue-shifted, indicating that the AgNPs were being etched by the oxidation reaction with H2O2. The TEM images showed nearly hexagonal particles with an average size of 60.60 ± 3.03 nm in the absence of H2O2. After adding H2O2, the particle size decreased to around 5.89 ± 0.52 nm (Fig. 3). The reaction mechanism of the developed device is the redox reaction of H2O2 with hexagonal AgNPs in a neutral medium; the reaction is as follows.
Ag⁺ + e⁻ → Ag⁰ (E⁰ = 0.7996 V)
H2O2 + 2Ag⁰ ⇌ 2Ag⁺ + 2OH⁻
As such, the standard reduction potentials of the interfering substances are lower than that of the Ag⁺/Ag⁰ couple. The selectivity was also investigated with mixtures of H2O2 and the other interfering substances at a 1:1 ratio (Fig. 4). The interferents produced positive or negative errors in the colorless band length of less than 10% compared with the response to H2O2 alone. Therefore, the proposed device shows excellent selectivity for H2O2 detection. Under the optimum conditions, a linear relation between the H2O2 concentration and the colorless band length was obtained (Fig. S5).
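As an illustration of how such a distance-based linear calibration can be used in practice, the sketch below fits the band length against standard concentrations and inverts the fit to read an unknown sample; all values are invented, not those of Fig. S5.

# Illustrative calibration sketch with invented standards.
import numpy as np

conc_mM = np.array([0.1, 0.5, 1.0, 2.0, 4.0])     # standard concentrations
band_mm = np.array([2.1, 6.0, 10.4, 19.8, 39.5])  # measured band lengths

slope, intercept = np.polyfit(conc_mM, band_mm, 1)  # linear fit
r = np.corrcoef(conc_mM, band_mm)[0, 1]
print(f"band = {slope:.2f} * conc + {intercept:.2f}  (r^2 = {r**2:.4f})")

# Invert the calibration for an unknown sample's band length.
unknown_band = 15.2
print(f"estimated H2O2: {(unknown_band - intercept) / slope:.2f} mM")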
Regarding the lifetime of our devices modified with hexagonal AgNPs, a device for point-of-care monitoring also needs to be stable for an extended period. We studied the shelf life of the proposed devices by testing them repeatedly every seven days; the devices were placed in a black plastic bag and kept in a fridge at 4 ºC. The distance signal tended to drift gradually, as a positive or negative error, over long storage periods; indeed, the change in the distance signal was less than 20% compared with the distance signal obtained on the first day (Fig. S6). Hence, this device can be stored for six months. This storage stability is greatly improved over that of some other color indicators [16-20].
The application of distance-based paper devices for the detection of glucose
The enzyme activity of GOx in the presence of glucose also depended on the pH of the solution, the concentration of GOx, and the reaction time (Figs. S7-S9). The determination of glucose using the dPADs via an off-line reaction without the NFs is shown in Fig. S10. Glucose reacts with oxygen in the presence of GOx through an oxidation reaction, generating H2O2, as described by the reported reactions.
Figure Captions
Scheme 1. Schematic illustration of the dPAD-NFs for the determination of glucose. Fig. 1 | 2020-07-28T13:04:13.578Z | 2020-07-24T00:00:00.000 | {
"year": 2020,
"sha1": "286621892b46a5bc32ac6bf22a321785c6f8786e",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/analsci/36/12/36_20P168/_pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e5f024d0652f3a2ed911d817b420867694637ca8",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
238417350 | pes2o/s2orc | v3-fos-license | Kinematic Synthesis and Analysis of the RoboMech Class Parallel Manipulator with Two Grippers
In this paper, methods of kinematic synthesis and analysis of the RoboMech class parallel manipulator (PM) with two grippers (end effectors) are presented. This PM is formed by connecting two output objects (grippers) with a base using two passive and one negative closing kinematic chains (CKCs). A PM with two end effectors can be used for reloading operations of stamped products between two adjacent main technologies in a cold stamping line. Passive CKCs represent two serial manipulators with two degrees of freedom, and the negative CKC is a three-joined link with three negative degrees of freedom. A negative CKC imposes three geometric constraints on the movements of the two output objects. Geometric parameters of the negative CKC are determined on the basis of the problems of the Chebyshev and least-square approximations. Problems of positions and analogues of velocities and accelerations of the PM with two end effectors have been solved.
Introduction
There are technological processes in industry where it is necessary to perform several operations simultaneously or sequentially, for example, in stamping production and in loading and unloading operations. For the simultaneous or sequential execution of several operations, it is advisable to use manipulation robots with multiple end effectors.
In this paper, a PM with two end effectors is synthesized that can be used to perform reloading operations from one piece of technological equipment to another. This PM replaces two industrial serial robots in an existing cold stamping production line and belongs to the RoboMech class of PMs. A PM in which the laws of motion of the end effectors and the actuators are set simultaneously is called a RoboMech class PM [1]. Setting the laws of motion of the actuators to be monotonic and uniform, rather than defining them by solving the inverse kinematics problem, simplifies the control system and improves the dynamics. Replacing two industrial robots with one RoboMech class PM with two end effectors simplifies the control system and increases the productivity and reliability of the technological line.
Since in RoboMech class PMs the laws of motion of the end effectors and actuators are set simultaneously, these PMs work only with certain structural schemes and geometric parameters of their links. The existing methods of kinematic analysis and synthesis of mechanisms and manipulators are based on the derivation and study of loop-closure equations: in kinematic analysis, using the known constant geometric parameters of the links and the variable generalized coordinates, the variable parameters characterizing the relative movements of the elements of kinematic pairs are determined; in kinematic synthesis (dimensional or parametric synthesis), for given positions of the input and output links, the constant geometric parameters of the links are determined. Loop-closure equations are derived on the basis of vector and matrix methods [2][3][4][5][6][7][8][9][10] and the theory of screws [11][12][13], which lead to polynomials of high degree. The resulting polynomials are then examined using computers to perform the kinematic analysis or synthesis, depending on the assigned task. McCarthy, in his papers [14,15], shows the close relationship between kinematics, synthesis, polynomials, and computation in the 21st century. In this approach, it is rather difficult to obtain the polynomials; moreover, as the structures of mechanisms and manipulators become more complex, the formation of the polynomials becomes more complicated and their degree increases. Performance analyses and applications of PMs and robots are also presented in [16][17][18][19][20][21].
In this paper, kinematic synthesis of the PM with two end effectors is carried out on the basis of a modular approach [22,23], according to which PMs, regardless of their complexity, are formed by connecting the output objects (end effectors) with a base using closing kinematic chains (CKCs), which are structural modules. CKCs can be active, passive, and negative, which have positive, zero, and negative DOFs, respectively. The active and negative CKCs impose geometric constraints on the motions of the output objects, and passive CKCs do not impose geometric constraints. The representation of PMs from separate structural modules simplifies the methods of their investigation.
Kinematic Synthesis of the PM with Two Grippers
A PM with two end effectors can be used in a cold stamping technological line for reloading operations between two hydraulic presses [24]. Figure 1 shows a structural scheme of the PM with two end effectors in two positions. In the first position (Figure 1a), the first gripper P1 in position P1,1 takes the workpiece after processing in the first hydraulic press for delivery to the store. At this time, the second gripper P2 in position P2,1 takes the previous workpiece processed in the first hydraulic press for delivery to the second hydraulic press for further processing.
In the second position (Figure 1b), the first gripper P1 in position P1,N delivers the workpiece to the store and the second gripper P2 in position P2,N delivers the previous workpiece to the second hydraulic press. The cycle is then repeated. The considered positioning PM with two end effectors is formed by connecting two output objects (grippers P1 and P2) with a base using two passive and one negative CKC in the following sequence. First, the grippers P1 and P2 are connected to the base using passive CKCs ABC and DEF with revolute kinematic pairs, respectively, each of which has two degrees of freedom. Since the passive CKCs ABC and DEF have two degrees of freedom, they can reproduce the given laws of motion of the output points P1 and P2. Then, to form a single movable PM with two end effectors, we connect the links BC and EF of the passive CKCs ABC and DEF with the base using a negative CKC GHI with three negative degrees of freedom. Figure 2 shows a block structure of the formed PM with two end effectors. Since the passive CKCs do not impose geometric constraints on the movements of the output points C and F, the vectors of the synthesis parameters p1 and p2 are varied by a generator of LP-tau sequences [25] to satisfy the constraints of the negative CKC GHI.
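The LP-tau generators of [25] are what are now commonly called Sobol low-discrepancy sequences, so the variation of p1 and p2 can be sketched with a standard quasi-random sampler. In the snippet below the parameter dimensions and bounds are illustrative placeholders of our own; only the sampling pattern reflects the procedure described above.

```python
from scipy.stats import qmc

# Hypothetical search box for the synthesis parameter vectors p1 and p2
# (three coordinates each; the bounds are placeholders, not the paper's values).
lower = [-1.0] * 6
upper = [ 1.0] * 6

sampler = qmc.Sobol(d=6, scramble=False)               # LP-tau = Sobol sequence
candidates = qmc.scale(sampler.random_base2(m=7), lower, upper)  # 2**7 trial points

# Each candidate would then be checked against the geometric constraints
# of the negative CKC GHI, keeping the best-approximating parameter sets.
print(candidates.shape)  # (128, 6)
```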
In this case, the geometric constraint conditions of the negative CKC must be fulfilled; these conditions are expressed through the variable distances between the joints G, H, and I. Let us consider the parametric synthesis of the negative CKC GHI with three negative degrees of freedom, as determined by the Chebyshev formula [26]: W = 3n − 2p5, where n is the number of links and p5 is the number of kinematic pairs of the fifth class (for the three-jointed link GHI, W = 3·1 − 2·3 = −3).
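As a quick numerical check of the structural synthesis, the Chebyshev formula can be evaluated for the CKCs used here. The function and example values below are our own illustration, not code from the paper.

```python
def chebyshev_dof(n: int, p5: int) -> int:
    """Chebyshev (Grubler) mobility formula for planar chains:
    W = 3n - 2*p5, with n movable links and p5 fifth-class (revolute) pairs."""
    return 3 * n - 2 * p5

# Illustrative cases: a 2-DOF serial chain (passive CKC), a dyad,
# and the three-jointed link GHI of the negative CKC.
print(chebyshev_dof(2, 2))  # 2  -> passive CKC: 2-DOF serial manipulator
print(chebyshev_dof(2, 3))  # 0  -> dyad, solvable in closed form
print(chebyshev_dof(1, 3))  # -3 -> negative CKC GHI: three constraints
```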
To do this, we preliminarily determine the positions of links 2 and 4 of the passive CKCs ABC and DEF. The coordinates x_H^(2) and z_H^(2) of the joint H in the local coordinate system of link 2 and the coordinates of the joints G and H in the absolute coordinate system OXYZ are related by rotation equations of the form

x_H(i) = x_B(i) + x_H^(2) cos φ_2i − z_H^(2) sin φ_2i,  z_H(i) = z_B(i) + x_H^(2) sin φ_2i + z_H^(2) cos φ_2i.

The geometric meanings of Functions (16)–(18) are the deviations of the coordinates of the joints H and G from circles with radii l_HG, l_GI, l_HI in the relative motion of the plane E x_4 z_4 and in the absolute motion of link 5.
After a change of the synthesis parameters that linearizes the problem, Functions (16)–(18) are expressed linearly in the vectors of synthesis parameters. Furthermore, the synthesis parameters of the negative CKC GHI are determined on the basis of the problems of Chebyshev and least-square approximations [16,17].
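Because Functions (16)–(18) are linear in the synthesis parameter vectors, the least-square stage reduces to an ordinary linear least-squares problem. A minimal sketch follows; the matrix A (rows built from the approximation points) and the right-hand side b are placeholders for the assembled linear forms of Functions (16)–(18), not quantities taken from the paper.

```python
import numpy as np

# Hypothetical assembled system: each row of A is the linear form of one of
# Functions (16)-(18) evaluated at one of the given positions; b collects
# the corresponding constant terms. Shapes are illustrative only.
rng = np.random.default_rng(0)
A = rng.normal(size=(33, 6))   # e.g., 11 positions x 3 constraint functions
b = rng.normal(size=33)

# Least-square approximation: minimize ||A p - b||_2 over the parameter vector p.
p, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print("synthesis parameters:", p)
print("residual norm:", np.linalg.norm(A @ p - b))
```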
Kinematic Analysis of the PM with Two Grippers
In the kinematic analysis of the PM with two end effectors (Figure 4), for the given geometric parameters of the links and the input angle φ_1i, it is necessary to determine the positions and the analogues of the velocities and accelerations of the links, including the output points C and F. The considered PM with two end effectors has the structural formula (40); i.e., it contains two dyads, II(2,5) and II(3,4).
According to the structural formula (40), a kinematic analysis of the dyad II(2,5) is carried out first, and then of the dyad II(3,4).
Analysis of Positions
Let us derive the vector loop-closure equation of the contour BGI (Equation (41)). Transferring l_BG e^{iφ_2i} to the right side of Equation (41) and squaring both sides, we obtain an equation from which the angles φ_2i and φ_5i of the dyad II(2,5) are determined successively. To solve the problem of the positions of the dyad II(3,4), we derive the vector loop-closure equation of the contour DEH (Equation (48)). Transferring l_EH e^{iφ_4i} to the right side of Equation (48) and squaring both sides, we similarly determine the angles φ_3i and φ_4i. The coordinates of the output points C and F in the absolute coordinate system OXYZ are then determined by Equations (55) and (56).
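Each dyad position problem above amounts to intersecting two circles: the joint G, for example, must lie at distance l_BG from B and l_GI from I. The sketch below solves this generic two-circle intersection with complex arithmetic; the point coordinates and link lengths are illustrative placeholders, not the synthesized parameters of the paper.

```python
import cmath

def dyad_joint(pB: complex, pI: complex, l_BG: float, l_GI: float, branch: int = +1) -> complex:
    """Position of the middle joint G of a dyad: |G - pB| = l_BG, |G - pI| = l_GI.
    'branch' selects one of the two assembly configurations."""
    d = abs(pI - pB)
    if d > l_BG + l_GI or d < abs(l_BG - l_GI):
        raise ValueError("dyad cannot be assembled for these lengths")
    a = (l_BG**2 - l_GI**2 + d**2) / (2 * d)   # distance from pB along pB -> pI
    h = (l_BG**2 - a**2) ** 0.5                # offset perpendicular to pB -> pI
    u = (pI - pB) / d                          # unit vector pB -> pI
    return pB + a * u + branch * h * (1j * u)  # 90-degree rotation gives the offset

# Illustrative data (ours): anchor points and link lengths.
G = dyad_joint(pB=0.2 + 0.5j, pI=1.0 + 0.0j, l_BG=0.7, l_GI=0.6)
phi5 = cmath.phase(G - (1.0 + 0.0j))           # angle of link GI, cf. phi_5i
print(G, phi5)
```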
Analogues of Velocities and Accelerations
To determine the analogues of the angular velocities of the PM with two end effectors, we derive the vector loop-closure equations of the contours ABGI and IHED and project them on the axes OX and OZ of the absolute coordinate system OXYZ, which yields the systems of Equations (61) and (62). From the system of Equations (61) we determine the analogues of the angular velocities φ′_2i and φ′_5i. Substituting the obtained value of the angular velocity analogue φ′_5i into the system of Equations (62), from this system we determine the angular velocity analogues φ′_3i and φ′_4i. The projections of the linear velocity analogues of the output points C and F on the axes of the absolute coordinate system OXYZ are determined by differentiating Equations (55) and (56), where the projections of the linear velocity analogues of the joints B and E are determined by differentiating Equations (44) and (53) with respect to the generalized coordinate φ_1i. To determine the angular acceleration analogues of the links, we differentiate the systems of Equations (61) and (62) with respect to the generalized coordinate φ_1i, which yields the systems of Equations (69) and (70). From the system of Equations (69) we determine the angular acceleration analogues φ″_2i and φ″_5i. Substituting the obtained values of the angular acceleration analogues φ″_2i and φ″_5i into the system of Equations (70), from this system we determine the angular acceleration analogues φ″_3i and φ″_4i. The projections of the linear acceleration analogues of the output points C and F on the axes of the absolute coordinate system OXYZ are determined by differentiating Equations (65) and (66). Table 1 shows N = 11 positions of the grippers P1 and P2 of the PM with two end effectors. A 3D CAD model of the synthesized PM with two grippers is shown in Figure 5. Positions and modules of the velocity and acceleration analogues of the synthesized PM grippers P1 and P2 are also presented in the graphical plots in Figures 6-8.
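Since differentiating each loop-closure equation produces a 2×2 linear system in two unknown angular velocity (or acceleration) analogues, the numerical step is a plain linear solve. The sketch below shows this step in isolation; the coefficient matrix and right-hand side stand in for the projected terms of Equations (61) or (69), with illustrative link lengths and angles of our own.

```python
import numpy as np

# Stand-in data (ours): link lengths and current angles of links 2 and 5.
l_BG, l_GI = 0.7, 0.6
phi2, phi5 = 0.9, 2.1
xB_p, zB_p = -0.3, 0.5   # velocity analogues of joint B, cf. Eq. (44)

# Projections of the differentiated loop-closure equation on OX and OZ;
# the unknowns are the angular velocity analogues phi2' and phi5'.
A = np.array([[-l_BG * np.sin(phi2), -l_GI * np.sin(phi5)],
              [ l_BG * np.cos(phi2),  l_GI * np.cos(phi5)]])
rhs = np.array([-xB_p, -zB_p])

dphi2, dphi5 = np.linalg.solve(A, rhs)
print(dphi2, dphi5)
```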
Conclusions
Kinematic synthesis and analysis of the PM with two end effectors have been carried out. In the kinematic synthesis, according to the given laws of motion (or positions) of the two end effectors, the structural scheme and geometric parameters of the links of the synthesized PM are determined. The structural scheme of this PM is formed by connecting two output objects (end effectors) and a base using three CKCs: two passive and one negative CKC. Passive and negative CKCs are structural modules from which the PM is formed. Passive CKCs are serial manipulators with two degrees of freedom, and the negative CKC is a three-jointed link. Serial manipulators (passive CKCs) do not impose geometric constraints on the movement of the output objects, and the three-jointed link (negative CKC) imposes three geometric constraints. Therefore, the geometric parameters of the links of the negative CKC are determined, and the geometric parameters of the links of the passive CKCs are varied depending on the imposed geometric constraints of the negative CKC. Kinematic synthesis of the negative CKC was carried out on the basis of the Chebyshev and least-square approximations. Since the structure of the synthesized PM consists of two dyads, the position analysis is solved analytically. Analogues of angular velocities and accelerations are determined from two systems of linear equations obtained by differentiating the loop-closure equations with respect to the generalized coordinate.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Conflicts of Interest: The authors declare no conflict of interest. | 2021-09-27T20:46:50.824Z | 2021-08-03T00:00:00.000 | {
"year": 2021,
"sha1": "69f6892bb2a8fccf672510976e0b62aee2d7ce81",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2218-6581/10/3/99/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8eaca4333e8bead9c9c02be450e9a4a5a5ad1849",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
15680794 | pes2o/s2orc | v3-fos-license | Intermediate risk of multidrug-resistant organisms in patients who admitted intensive care unit with healthcare-associated pneumonia
Background/Aims: Healthcare-associated pneumonia (HCAP) was proposed as a new pneumonia category in 2005, and treatment recommendations include broad-spectrum antibiotics directed at multidrug-resistant (MDR) pathogens. However, this concept continues to be controversial, and microbiological data are lacking for HCAP patients in the intensive care unit (ICU). This study was conducted to determine the rate and type of antibiotic-resistant organisms and the clinical outcomes in patients with HCAP in the ICU, compared to patients with community-acquired pneumonia (CAP) or hospital-acquired pneumonia (HAP). Methods: We conducted a retrospective cohort analysis of patients with pneumonia (n = 195) admitted to the medical ICU of a tertiary teaching hospital from March 2011 to February 2013. Clinical characteristics, microbiological distributions, treatment outcomes, and prognosis of HCAP (n = 74) were compared to those of CAP (n = 75) and HAP (n = 46). Results: MDR pathogens were significantly more frequent in HCAP patients (39.1%) than in CAP (13.5%) and less frequent than in HAP (79.3%, p < 0.001). Initial inappropriate antibiotic treatment occurred more frequently in the HCAP (32.6%) and HAP (51.7%) groups than in the CAP group (11.8%, p = 0.006). There were no differences in clinical outcomes. The significant prognostic factors were pneumonia severity and treatment response. Conclusions: MDR pathogens were isolated in HCAP patients requiring ICU admission at intermediate rates between those of CAP and HAP.
INTRODUCTION
Pneumonia is one of the most common infectious diseases requiring admission to the intensive care unit (ICU) for medical treatment. With an aging population, the number of patients who receive care at facilities other than hospitals, such as long-term healthcare facilities, assisted-living environments, or rehabilitation facilities, is increasing. Therefore, the traditional classifications for pneumonia based on the patient's location before admission, such as community-acquired pneumonia (CAP) or hospital-acquired pneumonia (HAP), needed to be updated [1,2]; consequently, a new term, healthcare-associated pneumonia (HCAP), was introduced by the Infectious Diseases Society of America (IDSA) and the American Thoracic Society (ATS) in 2005 [3].
Patients who develop HCAP are more similar to hospitalised patients than to independently living community-based patients, in that they have a greater burden of comorbidities, including cancer, chronic kidney disease, heart disease, chronic obstructive lung disease, immunosuppression, dementia, and impaired mobility [1,3,4]. These diverse spectra of HCAP patients may result in varied epidemiology and patient-specific risks for antibiotic-resistant pathogens [5-7].
To address this, the IDSA/ATS guidelines recommend broad empirical antibiotic therapy followed by culture-guided de-escalation for patients with HCAP [3]. However, despite an excellent negative predictive value (96%), the IDSA/ATS criteria have a low positive predictive value (18%) for differentiating a true infection from colonization with multidrug-resistant (MDR) bacteria in patients with HCAP admitted to the ICU [8]. Therefore, adherence to these guidelines is not required in all cases and can result in the overuse of antibiotics [9]. Moreover, the current approach to HCAP treatment is also in need of revision [10-12].
Herein, we tried to determine the differences in antibiotic-resistant organisms and clinical outcomes in HCAP patients who need ICU care, compared with CAP and HAP patients.
Study subjects and design
From March 2011 to February 2013, we maintained a prospective cohort in a 16-bed medical ICU and conducted a retrospective analysis of patients who required ICU admission for pneumonia. A clinical diagnosis of pneumonia required the presence of new radiographic infiltrates and at least two of the following clinical criteria: fever (> 38°C) or hypothermia (≤ 35°C), new cough with or without sputum production, pleuritic chest pain, dyspnoea, or altered breath sounds on auscultation. We excluded patients with a documented do-not-resuscitate order. Admission to the ICU was decided for patients who required close monitoring for septic shock under vasopressors or acute respiratory failure requiring intubation and mechanical ventilation [3].
We defined HAP as pneumonia that developed after hospitalisation for > 48 to 72 hours and HCAP as pneumonia that met at least one of the following criteria: (1) recent history of hospitalisation for ≥ 2 days within 90 days of the infection; (2) residence in a nursing home or long-term care facility; (3) recent intravenous antibiotic therapy, chemotherapy, or wound care within 30 days prior to the current infection; or (4) attendance at a haemodialysis clinic [3]. Patients with pneumonia who did not meet any of the criteria for HCAP or HAP were identified as having CAP. We compared clinical characteristics, pneumonia severity, the distribution of pathogens, and outcomes between the three groups (CAP, HCAP, and HAP). If a patient was admitted to the ICU for pneumonia ≥ 2 times during one hospital admission, only the first pneumonia event was included. The Institutional Review Board Committee of Seoul National University Bundang Hospital waived informed consent for this study (No. B-1105/127-001).
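For concreteness, the category assignment described above can be written as a small rule applied in order of precedence (HAP first, then HCAP, with CAP as the default). The function below is our own illustrative encoding of the stated criteria, not code from the study.

```python
def classify_pneumonia(hosp_hours_before_onset: float,
                       hospitalized_2d_within_90d: bool,
                       nursing_home_resident: bool,
                       iv_therapy_within_30d: bool,
                       dialysis_clinic: bool) -> str:
    """Assign CAP / HCAP / HAP following the study definitions [3]."""
    if hosp_hours_before_onset > 48:          # onset > 48-72 h after admission
        return "HAP"
    if (hospitalized_2d_within_90d or nursing_home_resident
            or iv_therapy_within_30d or dialysis_clinic):
        return "HCAP"
    return "CAP"

print(classify_pneumonia(0, False, True, False, False))  # HCAP
```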
Microbiological studies
On the day of ICU admission, microbiological studies were conducted using two sets of blood culture samples; Gram staining and culture of transendotracheal aspirate, or of sputum from patients without intubation; and, when available, a lower respiratory tract culture obtained by bronchoscopy at the ICU bedside. The obtained samples were cultured in a semi-quantitative manner. An etiological diagnosis was made when a respiratory pathogen was isolated from a sterile specimen, a pneumococcal antigen was detected in urine, the antibody titers for an atypical pathogen increased 4-fold or converted to positive, or a predominant micro-organism was isolated from adequate sputa (> 25 neutrophils and < 10 squamous epithelial cells per low-power field) or from bronchial washing or alveolar lavage fluids with compatible Gram staining results. Methicillin-resistant Staphylococcus aureus (MRSA), drug-resistant strains of Pseudomonas aeruginosa, Acinetobacter species, Stenotrophomonas maltophilia, and extended-spectrum β-lactamase (ESBL)-producing Enterobacteriaceae were considered to be MDR pathogens, as previously reported [13].
Antibiotic therapy
Empirical antibiotic therapy was defined as the use of any antibiotics for > 48 hours during the first 3 days of admission. Broad-spectrum antibiotics were defined as the use of any antibiotics that included anti-pseudomonal β-lactamase, vancomycin, or carbapenem.
Antibiotic therapy was initiated after at least blood culture samples were obtained, because of the severe condition requiring ICU admission, in basic accordance with the ATS/IDSA guideline [3]. However, the detailed antibiotic regimen was at the attending physician's discretion, taking into consideration the patient's risk factors and the severity of the disease. The appropriateness of antibiotic therapy was analysed for all cases with an etiological diagnosis according to susceptibility test criteria for lower respiratory tract pathogens. Antibiotic therapy was classified as inappropriate if the initially prescribed antibiotics were not directed at the identified pathogens, and treatment failure was defined as death during the initial treatment or a poor treatment response. A poor treatment response was defined as a change in the empirical antibiotics from the initial agents within the 7th day of the ICU admission.
Statistical analysis
To compare differences between the groups, Fisher's exact test was used for categorical variables, and the two-tailed t test, analysis of variance, or Mann-Whitney test was used for continuous variables, as appropriate. Statistical significance was established at a two-tailed p = 0.05. All analyses were conducted using SPSS version 18.0 (SPSS Inc., Chicago, IL, USA).
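The same comparisons can be reproduced with open-source tools. The snippet below shows the two workhorse tests named above on made-up data (the counts and measurements are placeholders, not the study's data); scipy is used here in place of SPSS.

```python
from scipy.stats import fisher_exact, mannwhitneyu

# Placeholder 2x2 table: MDR pathogen present/absent in two pneumonia groups.
table = [[29, 45],   # e.g., HCAP: MDR yes / no
         [10, 65]]   # e.g., CAP:  MDR yes / no
odds_ratio, p_cat = fisher_exact(table)

# Placeholder continuous variable (e.g., length of ICU stay, days) per group.
group_a = [5, 7, 9, 12, 4, 8, 15]
group_b = [6, 10, 11, 14, 9, 13, 18]
stat, p_cont = mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"Fisher exact p = {p_cat:.3f}; Mann-Whitney p = {p_cont:.3f}")
```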
Baseline characteristics
During the study period, 195 patients who required ICU care for pneumonia were eligible for the study: 75 with CAP (38.1%), 74 with HCAP (37.6%), and 46 with HAP (24.4%) (Table 1). The distribution of HCAP is described in Table 2 and Supplementary Table 1, and that of HAP in Supplementary Table 2. Patients with HCAP were significantly more likely to have comorbidities, particularly cerebrovascular disease (55.4% vs. 30.7%, p = 0.009) and chronic kidney disease (16.2% vs. 1.3%, p = 0.002), than CAP patients. Leukopenia was also significantly more common in patients with HCAP than in those with CAP (23.0% vs. 5.3%, p = 0.005). There were no significant differences in pneumonia severity measured using the confusion, urea, respiratory rate, age ≥ 65 (CURB-65) criteria (≥ 3) and the pneumonia severity index (PSI; high-risk class). Disease severity according to the Acute Physiology and Chronic Health Evaluation II (APACHE II) and Sequential Organ Failure Assessment scores was similar across the three groups.
In all three groups, S. aureus was the most common gram-positive pathogen. Of the S. aureus isolates, methicillin-susceptible S. aureus (MSSA) was detected significantly more often in the CAP group than in the HCAP and HAP groups (p = 0.001). MRSA was detected at comparable rates in the CAP and HCAP groups and significantly more often in the HAP group (p = 0.002). Of the gram-negative pathogens, HCAP and HAP patients had significantly higher rates of ESBL-producing Enterobacteriaceae than the CAP patients (p = 0.015).
The prevalence of MDR pathogens in the HCAP group (39.1%) was significantly higher than that in the CAP group (p < 0.005) and lower than that in the HAP group (p = 0.001) (Fig. 1). Inappropriate initial antibiotic treatment was administered significantly less often in the CAP group (p = 0.034) than in the HCAP and HAP groups (p = 0.146).
Antimicrobial treatment and clinical outcomes
In all three groups, the majority of the patients received combination antibiotic therapy as the initial treatment (CAP 86.7%, HCAP 78.4%, and HAP 71.7%) (Table 4). Among the combination therapies, antipseudomonal β-lactamase in combination with fluoroquinolone was the most frequently used in HCAP and HAP (39.2% and 34.8%), while β-lactamase in combination with fluoroquinolone was the most common in CAP (34.6%). Among the monotherapies, antipseudomonal β-lactamase was the most frequently used in all three groups (CAP 6.7%, HCAP 12.2%, and HAP 17.4%). Broad-spectrum antibiotics were administered to the CAP patients significantly less often than to the HCAP and HAP patients (p < 0.05).
There were also no significant differences in clinical outcomes, including ICU mortality, 28-day mortality, length of ICU stay, and the duration of mechanical ventilation (Table 5). The multiple logistic regression analysis showed significantly increased odds of mortality associated with the acute physiologic PSI score and treatment response (Table 6).
DISCUSSION
Previous studies have compared bacteriological differences and clinical outcomes between HCAP and CAP, or between HCAP and HAP [1,14-18]. One study compared HCAP with CAP and HAP at the same time, without mention of ICU admission [19]. To our knowledge, this is the first report to compare the microbiological epidemiology and clinical outcomes of patients admitted to the ICU with HCAP with those of patients with CAP and HAP. The three groups of pneumonia had similar baseline characteristics and pneumonia severity.
We found that the rate of MDR pathogens in patients with HCAP was lower than that in patients with HAP and greater than that in patients with CAP, as per the IDSA/ATS guidelines. However, the distribution of pathogens in the patients with HCAP differed from previous studies. The most common pathogen in HCAP reported in previous studies was S. aureus or S. pneumoniae [1,9,14,20]. In our study, K. pneumoniae (45.6%) was the most common pathogen; consequently, ESBL-producing K. pneumoniae was also the most common MDR pathogen. The incidence of MRSA in the HCAP group (19.6%) was similar to that in the CAP group (8.1%, p = 0.221) and lower than that in the HAP group (44.8%, p = 0.036). Similarly, another study of the microbial characteristics of HCAP and HAP in Korea showed a similar microbial distribution: K. pneumoniae was the most common pathogen in the HCAP group, and the incidence of MRSA was lower than that in the HAP group [21]. The explanation for these differences is not clear. A study of residents of long-term care facilities reported that the most common pneumonia pathogens were gram-negative bacilli (18%) [22]. Pop-Vicas and D'Agata [23] noted that the factors independently associated with the isolation of MDR gram-negative bacilli in these patients were age > 65 years, prior antibiotic therapy for > 2 weeks, and residence in a long-term care facility. These are similar to the definition of HCAP.
The rates of initial administration of broad-spectrum antibiotics in the patients with HCAP and HAP were higher than that in patients with CAP, as per the IDSA/ATS guidelines [3]. Despite the more common use of broad-spectrum antibiotics in the HCAP and HAP groups, the initial antibiotic treatment was inappropriate more frequently in these groups than in the CAP group. This difference may be explained by the differing prevalence of MDR pathogens between the groups: ESBL-producing K. pneumoniae was common in HCAP, whereas Pseudomonas spp. were less common in our study. Accordingly, regional antimicrobial prescribing guidelines should reflect regional trends in microbial drug resistance.
Generally, the clinical course is poorer and the length of hospital stay is prolonged in patients with HCAP compared to patients with CAP [1,6,8]. Our study failed to show a significant difference in clinical outcomes among the three groups because of the high disease severity inherent in patients requiring ICU care. Our study population was characterized by high disease severity, approaching PSI stage IV and V disease, and overall mortality at 28 days was more than 20% in all three groups. In one previous study that reported poorer clinical outcomes in patients with HCAP than in those with CAP among low-risk patients, the mortality rates were not different for the high-risk patients [24]. In particular, we did demonstrate that ICU mortality was associated with pneumonia severity. With similar disease severity, patients with CAP may demonstrate similar mortality to patients with HCAP or HAP, regardless of the presence of MDR pathogens.
Treatment response was another important factor for ICU mortality. Despite significant gradual differences among the groups in the rate of MDR pathogens, and despite high rates of broad-spectrum antibiotic use and inappropriate treatment in our study, there were no differences in clinical outcomes, including hospital length of stay and mortality. Physicians chose the initial antibiotics considering the risk factors for MDR pathogens and disease severity at the time of admission. There were no definite criteria for evaluating treatment response during treatment, yet it is critical to identify patients at risk for non-responding pneumonia using defined criteria in order to institute early appropriate therapy. El Solh et al. [17] evaluated treatment failure of severe pneumonia including nursing home residents; however, no specific definition of treatment failure was used. Parameters such as the PSI score, CURB-65, and APACHE II evaluate the severity of pneumonia at the time of admission, not the response to treatment. We evaluated treatment response with a definite criterion: a change in the empirical antibiotics from the initial agents within the 7th day of the ICU admission. Appropriate antibiotic stewardship that considers the treatment response could be a more important factor influencing better clinical outcomes in this population. The present study analysed data retrospectively within a single institution, which is a limitation. However, data were collected from a prospective cohort of patients who required ICU admission, and uniform methods were used to detect pathogens. Sputum and blood samples were evaluated for all of the patients, and > 60% of the patients underwent a bronchoscopy to obtain specimens. Our pathogen identification rate of 57% (112/195) was high compared to the 20% to 50% reported in other prospectively designed studies [1,4,23,24]. Second, prior antibiotic use in the HCAP group could not be accurately estimated due to insufficient information in the medical records from other clinics. In Korea, there is a wide variety of long-term healthcare facilities, including assisted-living, rehabilitation, haemodialysis, and convalescent hospital facilities, where antibiotics could be administered. Therefore, the number of patients in the HCAP subgroup (Supplementary Table 1) identified by the receipt of intravenous antibiotic therapy within 30 days of the current infection could have been underestimated. Finally, we excluded subsequent pneumonia events from patients who experienced ≥ 2 events in the same admission, potentially underestimating the number of HAP patients.
In conclusion, MDR pathogens were isolated in HCAP patients requiring ICU admission at intermediate rates between those of CAP and HAP. However, there were no significant differences among the types of pneumonia in the clinical outcomes, including mortality.
Table 1. Baseline characteristics of the study groups. Chronic lung disease includes chronic obstructive lung disease and structural lung diseases, such as bronchiectasis.
Table 2. Distribution of HCAP (n = 117). (a) Hospitalization in an acute care hospital for ≥ 2 days within 90 days of the infection; (b) infusion therapy, such as intravenous antibiotic therapy, chemotherapy, or wound care, within 30 days of the current infection; (c) residence in a nursing home or long-term care facility; (d) regular attendance at a dialysis clinic, including haemodialysis and peritoneal dialysis.
Table 3. Distribution of the isolated pathogens in CAP, HCAP, and HAP patients. (a) Numbers include mixed populations of pathogens (4 in CAP, 7 in HCAP, and 9 in HAP); (b) ESBL-producing Enterobacteriaceae include Klebsiella pneumoniae, Escherichia coli, and Enterobacter spp.; (c) MDR Pseudomonas spp. means resistant Pseudomonas aeruginosa; (d) p < 0.05 when compared with CAP.
Table 4. Initial antibiotic treatment. (a) Quinolone was levofloxacin; (b) broad-spectrum antibiotic use was defined as the use of any antibiotics including antipseudomonal β-lactamase, vancomycin, or carbapenem; (c) p < 0.05 when compared with CAP; (d) treatment failure means death during initial treatment or a change of empirical antibiotics from the initial agents to others by the 7th day from medical ICU admission.
Table 5. Clinical outcomes of the study populations. (a) A total of 92 patients were successfully weaned from mechanical ventilation in the ICU; (b) ICU-free days refers to the period from ICU discharge to hospital discharge.
Table 6. Results of the logistic regression analysis to determine the factors associated with mortality. Treatment response was defined as a change of empirical antibiotics from the initial agents to others within the 7th day.
| 2016-08-09T08:50:54.084Z | 2016-03-11T00:00:00.000 | {
"year": 2016,
"sha1": "c308ac76677b865bea7f09457fc988edb70f0acd",
"oa_license": "CCBYNC",
"oa_url": "http://kjim.org/upload/kjim-2015-103.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2add48cf6fab683eaf565c17c97019c44512b143",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259188058 | pes2o/s2orc | v3-fos-license | A shorter proof of the Marker-Steinhorn Theorem
By analyzing o-minimal definable preorders we give a proof of the Marker-Steinhorn Theorem [MS94] that shortens the original proof.
In particular, for any N-definable subset X of N^n, the externally definable set X ∩ M^n is M-definable.
The Marker-Steinhorn Theorem was first proved in [MS94] by the namesake authors. Tressl [Tre04] gave a short non-constructive proof for o-minimal expansions of real closed fields via valuation theory. Chernikov and Simon [CS15] derived the theorem as a corollary of a more general theorem around stable embeddedness in the NIP setting. Walsberg [Wal19] proved the theorem for o-minimal expansions of ordered groups by analyzing o-minimal definable linear orders.
We present a proof of the Marker-Steinhorn Theorem that shortens the original proof, avoiding its treatment by cases while also circumventing the use of regular cell decomposition. Our approach, which bears some similarity with Walsberg's [Wal19] in the group setting, is to use o-minimal cell decomposition to reduce the problem to showing that any cut in an M-definable preordered set that is realized in a tame extension is M-definable.
Conventions
Throughout we work in an o-minimal structure M = (M, <, . . .) and an elementary extension N = (N, <, . . .). Throughout n, m and l are natural numbers.
Any formula is in the language of M without parameters unless stated otherwise. We use u and v to denote tuples of variables. We use a, b, c, d, e, x and y to denote tuples of parameters. We use s, t and r exclusively for unary variables or parameters. We denote by |u| the length of a tuple u.
For any (partitioned) formula ϕ(u, v), let ϕ opp (v, u) denote the same formula after switching the order of the variables u and v. For any formula ϕ(u), possibly with parameters from N, and set A ⊆ N |u| , let ϕ(A) = {a ∈ A : N |= ϕ(a)}. Throughout and unless stated otherwise "definable" means "definable over M".
We use notation a, b for ordered pairs, setting aside the notation (a, b) for intervals. For a definable set B ⊆ M n+1 and an element x ∈ M n we denote the fiber of B at x by B x = {t ∈ M : x, t ∈ B}.
We refer to n-types as types in S n (M), interpreted as non-trivial ultrafilters in the Boolean algebra of definable subsets of M n . Recall that an n-type p is definable if, for every formula ϕ(u, v) with |u| = n, the set {b ∈ M |v| : ϕ(M n , b) ∈ p} is definable. By type basis for a type p we mean a filter basis, meaning a subset q ⊆ p such that any set in p is a superset of some set in q.
We fix some standard notation regarding o-minimal cells. For a function f we denote its domain by dom(f). Let us say that a partial function M^n → M ∪ {−∞, +∞} is definable if either it maps into M and is definable in the usual sense or otherwise it is constant and its domain is definable. Given two such functions f and g, let (f, g) denote the set of pairs x, t with x ∈ dom(f) ∩ dom(g) and f(x) < t < g(x). We direct the reader to [vdD98] for background on o-minimality. In particular, in our proof we will use o-minimal cell decomposition [vdD98].
Preorders
A preorder is a reflexive and transitive relation. A preordered set (B, ⪯) is a set B together with a preorder ⪯ on it. It is definable if the preorder is definable. We use the notation b ≺ c to mean b ⪯ c and not c ⪯ b.
Let p ∈ S_n(M) be an n-type and G be the collection of all definable partial functions f : M^n → M whose domain lies in p. Given a definable family F = {f_b : b ∈ B} of functions in G, without loss of clarity we will often abuse notation and refer to ⪯ too as the total preorder on the index set B given by b ⪯ c if and only if f_b ⪯ f_c. Note that, for any set A ⊆ M, if F and p are A-definable, then ⪯ on B is A-definable too.
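For definiteness, the preorder on G can be written out explicitly. The LaTeX below is our reconstruction of the standard construction, consistent with the surrounding text rather than quoted from the paper:

```latex
% For f, g in G, define the preorder via the type p:
\[
  f \preceq g
  \quad\Longleftrightarrow\quad
  \{\, x \in \operatorname{dom}(f) \cap \operatorname{dom}(g) : f(x) \le g(x) \,\} \in p .
\]
% For a definable family F = (f_b)_{b \in B} this induces a total preorder on B:
\[
  b \preceq c \quad\Longleftrightarrow\quad f_b \preceq f_c .
\]
```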
Proof of the Marker-Steinhorn Theorem
We state and prove the more contentful direction of the Marker-Steinhorn Theorem, and direct the interested reader to [MS94, Corollary 2.4] for the proof of the reverse implication.
Theorem 2.1 (Marker-Steinhorn Theorem [MS94]). Let M be an o-minimal structure and let N be a tame extension of M. For every a ∈ N^n, the type tp(a/M) is definable.
Proof. Let us fix a ∈ N^n and a formula ϕ(a, u), |u| = m. We must prove that the set ϕ(a, M^m) is definable. We do this by induction on n and m, where in the inductive step we assume that the statement holds for any n′, m′ smaller than n, m in the lexicographic order. We may clearly assume that a ∉ M^n. The case n = 1 (for any m) follows easily from o-minimality and tameness. In particular, let s_a be the supremum in M ∪ {−∞, +∞} of (−∞, a) ∩ M. If s_a < a, then tp(a/M) has a definable basis of the form {(s_a, t) : s_a < t, t ∈ M}; otherwise it has a definable basis of the form {(t, s_a) : t < s_a, t ∈ M}.
Suppose onwards that n > 1 and let a = d, e ∈ N^{n−1} × N. Let {ψ_i(N^{m+n}) : 1 ≤ i ≤ l} be a (0-definable) cell partition of ϕ^opp(N^{m+n}). In particular, N ⊨ ϕ(a, b) if and only if N ⊨ ψ_i(b, a) for some i. So, to prove the theorem, it suffices to pass to an arbitrary 1 ≤ i ≤ l and show that ψ_i(M^m, a) = ψ_i^opp(a, M^m) is definable. Hence, by passing from ϕ(a, u) to ψ_i^opp(a, u) if necessary, we may assume without loss of generality that all the sets of the form ϕ(N^n, b) are cells. Since B is definable, to prove that P and Q are both definable it suffices to show that either one of them is. If P has a supremum in B ∪ {−∞, +∞} with respect to ⪯, then the result is immediate, so we assume otherwise. In particular we have that, for every b ∈ P and c ∈ Q, dim(b, c) > 0.
Note that, to prove the definability of P, it suffices to show that there exists a definable set P′ ⊆ P that is cofinal in P, since then P = {b ∈ B : b ⪯ c for some c ∈ P′}. Similarly, it is enough to show the existence of a definable Q′ ⊆ Q coinitial in Q to prove the definability of Q. So we may always pass to a definable subset B′ ⊆ B such that either B′ ∩ P is cofinal in P or B′ ∩ Q is coinitial in Q, and then prove the definability of B′ ∩ P or B′ ∩ Q. Hence, after passing if necessary to one such B′ of minimum dimension, we may assume that, for any b ∈ P and c ∈ Q, condition (⋆) holds. Moreover, observe that in any finite partition of B there is always going to be a set B′ such that either B′ ∩ P is cofinal in P or B′ ∩ Q is coinitial in Q. So, by o-minimal cell decomposition, we may assume that B is a cell.
Suppose that m = 1. Let B(N) denote the set of b ∈ N^m such that ϕ(N^n, b) is a cell of the form (f_b, g_b) for two {b}-definable continuous functions f_b and g_b and N ⊨ ∃t ϕ(d, t, b). Clearly this is a superset of B, definable in N over {d}. For each b ∈ B(N) let f̃(b) = f_b(d). Note that f̃ is also definable in N over {d}. By o-minimality there exists a partition, definable over {d}, of B(N) into points and intervals such that, on each interval, f̃ is continuous and either constant or strictly monotonic. Since tp(d/M) is definable, the intersections of these cells with B are definable (in M). Observe that, on any such intersection, the restriction of the preorder ⪯ is either ≤, ≥, or ≤ ∪ ≥ (the trivial relation where any two points are indistinguishable), depending respectively on whether f̃ is strictly increasing, decreasing or constant. We fix one such interval I on which f̃ is continuous and either constant or strictly monotonic and show that I ∩ P, or equivalently I ∩ Q, is definable.
If I ∩ P = I ∩ B or I ∩ Q = I ∩ B then the result is immediate. Otherwise there exist b, c ∈ I ∩ B such that f̃(b) < e and f̃(c) > e. By continuity there must exist r in the subinterval of I with endpoints b and c with f̃(r) = e. By tameness, J = (−∞, r) ∩ M is definable. Finally, note that f̃|_I is not constant, and thus it is strictly monotonic. If it is increasing then it must be that J ∩ I = P ∩ I, and otherwise J ∩ I = Q ∩ I. Now suppose that m > 1. For every x in the projection π(B) of B to the first m − 1 coordinates, let ⪯_x be the definable preorder on the fiber B_x given by s ⪯_x t if and only if x, s ⪯ x, t. Note that, following the arguments in the case m = 1, the fibers P_x = {t ∈ M : x, t ∈ P} and Q_x = {t ∈ M : x, t ∈ Q} are definable, and moreover B_x can be partitioned into finitely many points and intervals where the restriction of ⪯_x is either ≤, ≥, or ≤ ∪ ≥.
If there exists some x ∈ π(B) such that {x} × P_x is cofinal in P or {x} × Q_x is coinitial in Q, then we are done. Suppose otherwise. We complete the proof by partitioning B into finitely many definable sets with the following property: for each set C in the partition and x ∈ π(C), either C_x ⊆ P_x or C_x ⊆ Q_x. Observe that the set Θ of all x ∈ π(C) such that C_x ⊆ P_x is described by a formula with parameters in M and a; so, by the induction hypothesis (applied in the case n, m − 1), this set is definable. It follows that C ∩ P = ⋃_{x∈Θ} ({x} × C_x) is definable, and we may conclude that P is definable.
Recall that B is a cell. If it is defined as the graph of a function then, by taking C in the above paragraph to be B, we are done, so we assume otherwise. In particular, for any x ∈ π(B) the fiber B_x is an interval. Let x ∈ π(B). By (⋆), since {x} × P_x is not cofinal in P and {x} × Q_x is not coinitial in Q, the set of b ∈ B lying strictly ⪯-between {x} × P_x and {x} × Q_x has dimension dim(B). Now recall that the fibers P_x and Q_x are definable. Suppose that there exists a maximal subinterval I′ of P_x that is bounded in B_x (with respect to the order <), and let r ∈ B_x be its right endpoint. Then in particular r is the left endpoint of a subinterval I″ of Q_x. If r ∉ P_x, then the set {b ∈ B : {x} × I′ ≺ b ≺ x, r} has dimension dim(B). If however r ∈ P_x, then the set {b ∈ B : x, r ≺ b ≺ {x} × I″} has dimension dim(B). If r is the right endpoint of a maximal subinterval of Q_x that is bounded in B_x, then the analogous statement holds. Note that, if there exist s, t ∈ B_x with s ∈ P_x and t ∈ Q_x, there will always be some r in the closed interval between s and t with the described properties.
For any b = x, t ∈ B, let L(x, t) and U(x, t) be, respectively, the sets of c ∈ B satisfying the lower and upper conditions above. These sets are definable uniformly in b ∈ B. Let D be the definable set of all b ∈ B such that U(b) ∪ L(b) has dimension dim(B). By the above paragraph, for every x ∈ π(B) and s, t ∈ B_x, if s ∈ P_x and t ∈ Q_x, then there is some r in the closed interval between s and t such that x, r ∈ D.
We now show that, for every x ∈ π(B), the fiber D x is finite. Then the proof is completed by taking any finite cell partition of B compatible with D, since, for any set C in said partition and x ∈ π(C), the fiber C x is going to be either a point or an interval contained in P x or Q x .
Towards a contradiction, suppose that D_x is infinite for some x ∈ π(B). Let J′ be a subinterval of D_x on which ⪯_x is either ≤, ≥, or ≤ ∪ ≥. If ⪯_x equals ≤ ∪ ≥ then, for every t ∈ J′, the sets U(x, t) and L(x, t) are empty, contradicting that dim(L(x, t) ∪ U(x, t)) = dim(B) > 0. Suppose that ⪯_x is either ≤ or ≥. Note that, for any distinct s, t ∈ J′, the sets L(x, s), U(x, s), L(x, t) and U(x, t) are pairwise disjoint. Using the fact that dim(L(x, t) ∪ U(x, t)) = dim(B) for every t ∈ J′, and applying the Fiber Lemma for o-minimal dimension [vdD98, Chapter 4, Proposition 1.5 and Corollary 1.6], we derive that dim(⋃_{t∈J′} (L(x, t) ∪ U(x, t))) > dim(B), a contradiction. | 2023-06-19T01:15:54.633Z | 2023-06-16T00:00:00.000 | {
"year": 2023,
"sha1": "4b93eab5ce1b127b1653cb6c82c5829ecf4d1fe5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "4b93eab5ce1b127b1653cb6c82c5829ecf4d1fe5",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
256236655 | pes2o/s2orc | v3-fos-license | On the fractional p-Laplacian problems
This paper deals with nonlocal fractional p-Laplacian problems with difference. We establish a theorem showing the existence of a sequence of weak solutions for a family of nonlocal fractional p-Laplacian problems with difference. We first show that there exists a sequence of weak solutions for these problems on a finite-dimensional subspace. We next show that the sequence of weak solutions of the finite-dimensional problems has a limit sequence, and that this limit sequence is a sequence of solutions of our problems. We obtain this result via an estimate of the energy functional and the compactness of the continuous embeddings between certain function spaces.
Introduction
The nonlocal fractional p-Laplacian problems with difference appear in models of nonlinear fractional Laplace flows, such as parabolic boundary value problems with a time derivative and fractional p-Laplacian differential operators. The fractional Laplacian flows arise in applications of nonlinear elasticity theory, electro-rheological fluids, and non-Newtonian fluid theory in a porous medium (cf. [9,31,40]).
In this paper we consider a family of fractional p-Laplacian problems of Rothe type with difference under boundary and initial conditions:

(−Δ)^s g_p u_n + λV(x)|u_n|^{p−2} u_n + (|u_n|^{r−1} u_n − |u_{n−1}|^{r−1} u_{n−1})/h = 0 in Ω, (1.1)

where Ω is a bounded domain of R^N, N ≥ 3, with smooth boundary ∂Ω, s ∈ (0, 1), p is a real constant, 2 ≤ p ≤ N, r = p*_s − 1 = Np/(N − sp) − 1, g_p is the continuous function defined by g_p(t) = |t|^{p−2}t for t ≠ 0 and g_p(0) = 0, λ > 0, V : Ω → [0, ∞) is a continuous function, u_n is a measurable function defined on Ω with values in R, n = 1, 2, . . . , and (−Δ)^s g_p is the fractional p-Laplacian operator defined as follows: for each x ∈ R^N and any u ∈ C_0^∞(Ω),

(−Δ)^s g_p u(x) = P.V. ∫ g_p(|u(x) − u(y)|/|x − y|^s) · ((u(x) − u(y))/|u(x) − u(y)|) · dy/|x − y|^{N+s}.
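For readability, here is a compact LaTeX rendering of the operator just defined; taking the integral over R^N is our assumption, since the display above is abbreviated, and otherwise this only restates the formula:

```latex
\[
  (-\Delta)^s_{g_p} u(x)
  = \mathrm{P.V.}\int_{\mathbb{R}^N}
      g_p\!\left(\frac{|u(x)-u(y)|}{|x-y|^{s}}\right)
      \frac{u(x)-u(y)}{|u(x)-u(y)|}\,
      \frac{dy}{|x-y|^{N+s}},
  \qquad g_p(t)=|t|^{p-2}t .
\]
```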
In recent years, for pure mathematical research and concrete real-world applications, the fractional p-Laplacian operator has been studied on the fractional Sobolev space

W^s L_{g_p}(Ω) = { u ∈ L_{g_p}(Ω) : ∫_Ω ∫_Ω |u(x) − u(y)|^p / |x − y|^{N+sp} dx dy < ∞ },

where L_{g_p}(Ω) is the Banach space of measurable functions u : Ω → R with ∫_Ω |u|^p dx < ∞. The fractional p-Laplacian operator and the fractional Sobolev space arise in many fields of science, for example, elastic mechanics (see [40]), electro-rheological fluid dynamics (see [31]), and image processing (see [6]); see also the references therein. When 0 < s < 1, (−Δ)^s is the usual fractional Laplacian operator defined, for each x ∈ R^N and any u ∈ C_0^∞(Ω), by

(−Δ)^s u(x) = P.V. ∫_{R^N} (u(x) − u(y)) / |x − y|^{N+2s} dy,

where P.V. denotes the Cauchy principal value. Since 0 < s < 1, (−Δ)^s is called the fractional Laplacian operator; for this operator, see [8,10,19] and the references therein. Fractional Laplacian problems arise from continuum mechanics, phase transition phenomena, population dynamics, minimal surfaces, and game theory. The body of literature on fractional Laplacian operators and their applications is quite large; we refer the reader to [3,12,13,24-29,33-38] and the references therein. For the basic properties of the fractional Sobolev spaces, we refer the reader to [10]. As s → 1⁻, (−Δ)^s reduces to −Δ; for s = 1, we identify (−Δ)^s with the classical Laplacian operator −Δ. If 2 < s < ∞, (1.1) is called an s-exponent problem of elliptic type. The s-exponent Laplacian problems of elliptic type appear in many applications, for example, elastic mechanics, electro-rheological fluid dynamics, and image processing; we refer the reader to [2,9,11,22,23,31] and the references therein. In [5,6,16] there are some papers concerning related equations involving the fractional Laplacian operator, but results for fractional Sobolev spaces and the fractional Laplacian operator with exponent are few. In particular, the fractional Laplacian operator with variable exponent was first suggested by Lorenzo and Hartley [20]. The fractional Laplacian operator with variable exponent and the variable exponent fractional Sobolev space have appeared in nonlinear diffusion processes: some diffusion processes reacting to temperature changes can be explained well by fractional derivatives in a nonlocal integro-differential operator (see [21]). In [17,18,30], the authors consider pseudodifferential equations on the fractional Sobolev spaces.
In recent years, Kirchhoff equations involving the fractional p-Laplacian have attracted interest and have been studied by several mathematicians. In particular, when s = 1 and p = 2, −Δ is the classical Laplace operator; Ji, Fang, and Zhang [15] provided multiplicity results for solutions of asymptotically linear Kirchhoff equations by using a variant of the mountain pass theorem and the variational method. When 0 < s < 1 and p = 2, Fiscella [14] proved the existence of a solution for a class of Kirchhoff-type problems involving the fractional Laplacian operator with a singular term and a critical nonlinearity. When 0 < s < 1 and 1 < p < N/s, Xiang, Zhang, and Rǎdulescu [39] obtained multiplicity results for superlinear Schrödinger-Kirchhoff equations involving the fractional N/s-Laplacian with critical exponential nonlinearity by using the concentration compactness principle in the fractional Sobolev space and the mountain pass theorem. When 0 < s < 1 and p = N/s, Mingqi, Rǎdulescu, and Zhang [25] provided existence and multiplicity results for Kirchhoff equations involving the fractional N/s-Laplacian with critical nonlinearity via the mountain pass geometry and Ekeland's variational principle. They [26] also obtained existence and multiplicity results for Kirchhoff equations involving the fractional N/s-Laplacian with singular exponential nonlinearity by using the same methods.
The weak solutions u_n ∈ W^s L_{g_p}(Ω) of (1.1) are measurable functions defined on Ω with values in R, n = 1, 2, . . . , which satisfy (1.1) in the weak sense: for any w ∈ W^s L_{g_p}(Ω), the integral identity displayed in Section 4 holds. Our main result is as follows. The outline of the proof of Theorem 1.1 is as follows: we first prove the existence of a sequence of weak solutions for a family of fractional p-Laplacian difference equations defined on a finite-dimensional subspace. We next show that the sequence of weak solutions of the finite-dimensional problems has a limit sequence, and that this limit sequence is a sequence of solutions of our problem. We obtain this result by an estimate of the energy functional and the compactness of the continuous embeddings between some special spaces. In Sect. 2, we introduce the fractional Lebesgue space with exponent and the fractional Sobolev space and give some of their properties. In Sect. 3, we prove that problem (1.1) restricted to a finite-dimensional subspace has a sequence of weak solutions for each n = 1, 2, . . . . In Sect. 4, we show that the sequence of weak solutions of the finite-dimensional problems has a limit sequence and that this limit sequence is a sequence of solutions of our problem (1.1).
Preliminaries
For the variational setting of our problem, we introduce some definitions and results on the fractional Lebesgue space with exponent and the fractional Sobolev space.
Let N ≥ 3 and let Ω be a bounded open domain in R^N with smooth boundary ∂Ω. Let 2 ≤ p < ∞ and r = Np/(N − sp) − 1. The Lebesgue space with p-exponent is L^p(Ω) = {u : Ω → R measurable : ∫_Ω |u|^p dx < ∞}. The Sobolev space with p-exponent is W^{1,p}(Ω) = {u ∈ L^p(Ω) : |∇u| ∈ L^p(Ω)}. Then L^p(Ω) and W^{1,p}(Ω) are Banach spaces. We also define the Sobolev space W_0^{1,p}(Ω) as the closure of C_0^∞(Ω) in W^{1,p}(Ω); this space is also a reflexive Banach space. If p is bounded, then the norm ‖·‖_{W^{1,p}(Ω)} is equivalent to the norm [·]_{W^{1,p}(Ω)}. If p = ∞, L^∞(Ω) is the Banach space of essentially bounded measurable functions. If p is bounded and p′ is the conjugate exponent of p, defined by p′ = p/(p − 1), then the dual space (L^p(Ω))′ can be identified with L^{p′}(Ω). If 1 < p < ∞, then the Lebesgue space L^p(Ω) with p-exponent is separable and reflexive, and Hölder's inequality is valid in L^p(Ω). Let L_{g_p}(Ω) be the space defined in the Introduction. Then (L_{g_p}(Ω), ‖u‖_{L_{g_p}}) is a Banach space whose norm is equivalent to the Luxemburg norm. In L_{g_p}(Ω), Hölder's inequality is valid: ∫_Ω |uv| dx ≤ 2‖u‖_{L_{g_p}}‖v‖_{L_{g_{p′}}}. Now we introduce the fractional Sobolev space with p-exponent. Let 0 < s < 1 and 2 ≤ p < ∞. The fractional Sobolev space with p-exponent is defined by W^s L_{g_p}(Ω) = {u ∈ L_{g_p}(Ω) : ∫_Ω ∫_Ω |u(x) − u(y)|^p / |x − y|^{N+sp} dx dy < ∞}. Let W_0^s L_{g_p}(Ω) denote the closure of C_0^∞(Ω) in the norm ‖u‖_{s,g_p}. The following lemma shows that [·]_{s,g_p} is a norm on W^s L_{g_p}(Ω) equivalent to ‖·‖_{s,g_p}.
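For reference, the Luxemburg norm on L_{g_p}(Ω) and the associated Hölder inequality mentioned above take the following standard forms; the Young function G_p below is our assumption, chosen consistently with g_p, rather than a formula quoted from the paper:

```latex
\[
  \|u\|_{L_{g_p}} \;=\; \inf\Big\{\lambda > 0 \;:\;
     \int_\Omega G_p\!\Big(\frac{|u(x)|}{\lambda}\Big)\,dx \le 1 \Big\},
  \qquad G_p(t) = \int_0^t g_p(\tau)\,d\tau = \frac{|t|^p}{p},
\]
\[
  \int_\Omega |u\,v|\,dx \;\le\; 2\,\|u\|_{L_{g_p}}\,\|v\|_{L_{g_{p'}}},
  \qquad \frac1p + \frac1{p'} = 1 .
\]
```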
Lemma 2.2 ([32]; Generalized Poincaré inequality). Let 0 < s < 1 and 2 ≤ p < ∞. Then there exists a positive constant C > 0 such that ‖u‖_{L_{g_p}} ≤ C[u]_{s,g_p} for every u ∈ W_0^s L_{g_p}(Ω) (inequality (2.1)); that is, the embedding W_0^s L_{g_p}(Ω) → L_{g_p}(Ω) is continuous and compact. Furthermore, [u]_{s,g_p} is a norm on W_0^s L_{g_p}(Ω) equivalent to ‖·‖_{s,g_p}.
Lemma 2.3. Let 0 < s < 1, 2 ≤ p < ∞, and N > sp, and let q ∈ (1, Np/(N − sp)]. Then there exists a constant C_1 = C_1(N, p, q, s) > 0 such that ‖u‖_{L_{g_q}} ≤ C_1[u]_{s,g_p} for every u ∈ W_0^s L_{g_p}(Ω). Proof. By Theorem 6.7 and Theorem 6.9 of [10], for N > sp and any fixed constant exponent q ∈ (1, Np/(N − sp)], W_0^s L_{g_p}(Ω) is continuously embedded into L_{g_q}(Ω). It follows that (2.2) holds. By combining inequalities (2.1) and (2.2), [u]_{s,g_p} is an equivalent norm on W_0^s L_{g_p}(Ω). It follows that (2.3) holds.
Lemma 2.4
Let 0 < s_1 < s < s_2 < 1 and 2 ≤ p < ∞. Then the embeddings W_0^{s_2} L_{g_p}(Ω) → W_0^s L_{g_p}(Ω) → W_0^{s_1} L_{g_p}(Ω) are continuous. Moreover, we have [u]_{s_1,g_p} ≤ C[u]_{s,g_p} for every u ∈ W_0^s L_{g_p}(Ω) and a suitable constant C > 0. It follows from this inequality that the embedding W_0^{s_2} L_{g_p}(Ω) → W_0^s L_{g_p}(Ω) is continuous. Similarly, for any u ∈ W_0^s L_{g_p}(Ω) the analogous estimate holds, and hence the embedding W_0^s L_{g_p}(Ω) → W_0^{s_1} L_{g_p}(Ω) is continuous. Thus the proof of the lemma is complete. Furthermore, [u]_{s,g_p} is a norm on W_0^s L_{g_p}(Ω), and there exists a constant C_2 = C_2(N, p, s) > 0 such that ‖u‖_{s,g_p} ≤ C_2[u]_{s,g_p}. Since 0 < s < 1 and N > sp, there exists a constant τ_1 > 0 such that the corresponding exponent inequality holds. Since Ω is bounded, there exist a constant ℓ > 0 and l disjoint hypercubes V_i, i = 1, 2, . . . , l, covering Ω. By Lemma 2.3 and Theorems 6.7 and 6.9 of [12], there exists a constant D = D(N, s, p) for which the local embedding estimate holds on each V_i. By Hölder's inequality, for q ∈ (1, p*_s], the local estimates combine into a global one, and it follows from (2.4) that ‖u‖_{g_q} ≤ D‖u‖_{s,g_p}. Thus the embedding W^s L_{g_p}(Ω) → L_{g_q}(Ω) is continuous. Furthermore, we show that the embedding is compact: since p*_s is constant on each V_i, for q ∈ (1, p*_s] the embedding W^s L_{g_p}(V_i) → L_{g_q}(V_i) is compact on each V_i, and thus the embedding W^s L_{g_p}(Ω) → L_{g_q}(Ω) is compact. It follows that there exists a constant D = D(N, p, s) > 0 such that ‖u‖_{g_q} ≤ D‖u‖_{s,g_p}. By Lemma 2.2, we have the following lemma.
We need the following inequality for the p-Laplacian operator.
Then there exist constants C_1 and C_2 depending on p and N such that, for any ξ, η ∈ R^N,

(|ξ|^{p−2}ξ − |η|^{p−2}η) · (ξ − η) ≥ C_1|ξ − η|^p and ||ξ|^{p−2}ξ − |η|^{p−2}η| ≤ C_2(|ξ| + |η|)^{p−2}|ξ − η|.

We recall a fundamental fact, which plays a crucial role in our main result.
Lemma 2.8 ([4]). Assume that Q is a continuous vector field from R^N to R^N that satisfies
Q(x) · x ≥ 0 for every x ∈ R^N with |x| = ρ, for some ρ > 0. Then there exists a point x ∈ B_ρ(0) such that Q(x) = 0, where B_ρ(0) denotes the ball centered at the origin with radius ρ in R^N.
Existence of approximating solutions
In this section we show that there exists a unique approximating solution of (1.1) on each finite-dimensional subspace. Let us choose a family of basis functions {φ_i}, i = 1, 2, . . . , whose span is dense in W_0^s L_{g_p}(Ω); then any element u_n in W_0^s L_{g_p}(Ω) and the initial data u_0 can be expanded as series in the φ_i. Let us define the finite-dimensional subspace F_k of W_0^s L_{g_p}(Ω) by F_k = span{φ_1, . . . , φ_k}. Let N be any positive integer, which shall be sent to infinity, and let h be any small positive number. For any fixed integer k = 1, 2, . . . , let u_{n,k} = Σ_{i=1}^k a_{n,k}^i φ_i(x) be a family of Galerkin approximating solutions for the family of fractional Laplace equations with p-exponent and difference defined on the finite-dimensional subspaces.
Let us define the functionals J_{n,k}^i(ρ), i = 1, . . . , k, and the map J_{n,k} = (J_{n,k}^1, . . . , J_{n,k}^k) : R^k → R^k. Then J_{n,k} is continuous in ρ. We claim that J_{n,k}(ρ) · ρ ≥ 0: in fact, by Young's inequality and the generalized Poincaré inequality of Lemma 2.2, for any ε > 0 there exists a constant C_ε > 0 such that the required lower bound holds.
Proof. (i) The sequence u_{n−1} ∈ L_{g_{r+1}}(Ω) is defined inductively, and by Lemma 3.2, {u_{n,k}} is bounded in W_0^s L_{g_p}(Ω). Since the embedding W_0^s L_{g_p}(Ω) → L_{g_q}(Ω) is continuous and compact for any q with 1 ≤ q < Np/(N − sp) = r + 1, the embedding W_0^s L_{g_p}(Ω) → L_{g_r}(Ω) is continuous and compact. Thus, up to a subsequence, {u_{n,k}} converges strongly to u_n = lim_{k→∞} u_{n,k} in L_{g_r}(Ω).
(ii) By Lemma 2.7(i), there exist constants C > 0 and C′ > 0 such that

∫_Ω ||u_{n,k}|^{r−1}u_{n,k} − |u_n|^{r−1}u_n| dx ≤ C ∫_Ω (|u_{n,k}|^{r−1} + |u_n|^{r−1})|u_{n,k} − u_n| dx ≤ C′ (∫_Ω (|u_{n,k}|^r + |u_n|^r) dx)^{(r−1)/r} (∫_Ω |u_{n,k} − u_n|^r dx)^{1/r} ≤ C′ ‖u_{n,k} − u_n‖_{L_{g_r}}.
Since, by (i), u_{n,k} → u_n strongly in L_{g_r}(Ω) as k → ∞ and u_n ∈ L_{g_r}(Ω), it follows that |u_{n,k}|^{r−1}u_{n,k} − |u_n|^{r−1}u_n → 0 in L^1(Ω).
Proof of Theorem 1.1. By Lemma 3.1, for each n = 1, 2, . . . , N and k = 1, 2, . . . , there exists a unique weak solution u_{n,k} ∈ F_k ⊂ W_0^s L_{g_p}(Ω) of (3.1). By Lemma 4.1, up to a subsequence, {u_{n,k}} converges strongly to u_n = lim_{k→∞} u_{n,k} in L_{g_r}(Ω).
We shall show that u_n satisfies (1.1); that is, for any w ∈ W_0^s L_{g_p}(Ω),

∫_Ω (−Δ)^s g_p u_n · w dx + λ ∫_Ω V(x)|u_n|^{p−2}u_n · w dx + ∫_Ω ((|u_n|^{r−1}u_n − |u_{n−1}|^{r−1}u_{n−1})/h) · w dx = 0,

i.e.,

∫∫ (|u_n(x) − u_n(y)|^{p−2}/|x − y|^{s(p−2)}) · ((u_n(x) − u_n(y))/|x − y|^s) · ((w(x) − w(y))/|x − y|^{N+s}) dx dy + λ ∫_Ω V(x)|u_n|^{p−2}u_n w dx + ∫_Ω ((|u_n|^{r−1}u_n − |u_{n−1}|^{r−1}u_{n−1})/h) w dx = 0.

In fact, for any w ∈ W_0^s L_{g_p}(Ω), let w_k = Σ_{i=1}^k h_{n,i} φ_i(x) be the approximating sequence which converges to w in W_0^s L_{g_p}(Ω). By Lemma 2.7(ii), there exists a constant C_2 > 0 for which the corresponding difference estimate holds. On the other hand, putting w = u_{n,k} in (3.1), we have

−∫_Ω (−Δ)^s g_p u_{n,k} · u_{n,k} dx = λ ∫_Ω V(x)|u_{n,k}|^{p−2}u_{n,k} · u_{n,k} dx + ∫_Ω ((|u_{n,k}|^{r−1}u_{n,k} − |u_{n−1}|^{r−1}u_{n−1})/h) · u_{n,k} dx. (4.1)

Taking the test function w_k − u_{n,k} in (3.1), we have

−∫_Ω (−Δ)^s g_p u_{n,k} · (w_k − u_{n,k}) dx = λ ∫_Ω V(x)|u_{n,k}|^{p−2}u_{n,k} · (w_k − u_{n,k}) dx + ∫_Ω ((|u_{n,k}|^{r−1}u_{n,k} − |u_{n−1}|^{r−1}u_{n−1})/h) · (w_k − u_{n,k}) dx. (4.2)

By adding (4.1) and (4.2), we have

∫_Ω (−Δ)^s g_p w_k · (w_k − u_{n,k}) dx + λ ∫_Ω V(x)|u_{n,k}|^{p−2}u_{n,k} · (w_k − u_{n,k}) dx + ∫_Ω ((|u_{n,k}|^{r−1}u_{n,k} − |u_{n−1}|^{r−1}u_{n−1})/h) · (w_k − u_{n,k}) dx ≥ 0. (4.4)

By the energy estimate of Lemma 3.2, there exists a constant C_1 > 0 such that

∫_Ω (−Δ)^s g_p u_{n,k} · u_{n,k} dx + λ ∫_Ω V(x)|u_{n,k}|^p dx + (r/((r + 1)h)) ∫_Ω |u_{n,k}|^{r+1} dx = ∫∫ (|u_{n,k}(x) − u_{n,k}(y)|^p/|x − y|^{N+sp}) dx dy + λ ∫_Ω V(x)|u_{n,k}|^p dx + (r/((r + 1)h)) ∫_Ω |u_{n,k}|^{r+1} dx ≤ C_1;

it follows that the sequence {u_{n,k}} is bounded in L_{g_p}(Ω), and so, up to a subsequence, u_{n,k} converges to u_n weakly in L_{g_p}(Ω).
Passing to the limit as k → ∞, we have that the first and second parts of the left-hand side of (4.4) converge:

∫_Ω (−Δ)^s g_p w_k · (w_k − u_{n,k}) dx + λ ∫_Ω V(x)|u_{n,k}|^{p−2}u_{n,k} · (w_k − u_{n,k}) dx → ∫_Ω (−Δ)^s g_p w · (w − u_n) dx + λ ∫_Ω V(x)|u_n|^{p−2}u_n · (w − u_n) dx.

On the other hand, by (ii) of Lemma 4.1, u_{n,k} → u_n a.e. in Ω, and by Vitali's convergence theorem, up to a subsequence, |u_{n,k}|^{r−1}u_{n,k} converges to |u_n|^{r−1}u_n weakly in L^{(r+1)/r}(Ω). | 2023-01-26T14:43:01.916Z | 2021-02-26T00:00:00.000 | {
"year": 2021,
"sha1": "96d0c42c135ae79a65935986a9b9c2377b201af1",
"oa_license": "CCBY",
"oa_url": "https://journalofinequalitiesandapplications.springeropen.com/counter/pdf/10.1186/s13660-021-02569-z",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "96d0c42c135ae79a65935986a9b9c2377b201af1",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": []
} |
270085010 | pes2o/s2orc | v3-fos-license | Antagonistic activity of two Bacillus strains against Fusarium oxysporum f. sp. capsici (FOC-1) causing Fusarium wilt and growth promotion activity of chili plant
Fusarium oxysporum f. sp. capsici (Foc) occupies a significant position in agriculture, with a negative impact on the chili plant in terms of growth, fruit quality, and yield. Biological control is one of the promising strategies to control this pathogen in crops. Chili is considered one of the most important crops in the Hyderabad region and is affected by Fusarium wilt disease. The pathogen was isolated from infected samples in the region and was confirmed by morphological characteristics and PCR with a band of 488 bp. The bacterial strains were isolated from the rhizosphere soil of healthy plants and were also confirmed by PCR with a band of 1,542 bp. The molecular characterization of the fungal and bacterial strains showed 99.9% homology with the retrieved sequences of Fusarium oxysporum f. sp. capsici and Bacillus subtilis from NCBI. One-month-old Ghotki chili plants were inoculated with a 1×10^5 cfu/ml spore suspension, confirming that FOC-1 is responsible for chili Fusarium wilt disease. Subsequently, among the 33 screened Bacillus strains, only 11 showed antagonistic activity against F. oxysporum. Of these, only two strains (AM13 and AM21) showed maximum antagonistic activity against the pathogen, reducing infection and promoting growth parameters of chili plants under both in vitro and greenhouse conditions. The study suggests that biological control is a promising strategy for the management of Fusarium wilt of chili in the field.
Introduction
Chili (Capsicum annuum L.), belonging to the Solanaceae family, is one of the most significant crops (Magaña-López et al., 2022) and is widely grown for its spicy taste, pungency, and color. It is a rich source of vitamins A and B and is also used in different types of foods, medicines, and cosmetics (Jamil et al., 2021; Akash et al., 2022). In Pakistan, chili crops cover approximately 91,800 hectares of land with an annual production of 115,000 tons (Anjum et al., 2020). The limited yield of this crop is the main challenge in Pakistan and worldwide. It is very sensitive to various soil-borne diseases, such as Fusarium wilt, damping-off, and root rot, caused by several genera, including Pythium, Phytophthora, Fusarium, Sclerotium, and Rhizoctonia (Dar et al., 2015; Hyder et al., 2021). Among them, Fusarium wilt of chili, caused by Fusarium oxysporum f. sp. capsici, is one of the most aggressive and damaging diseases, causing huge losses in the crop annually (Velarde-Félix et al., 2018; Tilahun et al., 2024). The pathogen survives for many years in soil debris and can cause infection from the seedling stage to fruiting (Shen et al., 2013). Infected plants show various symptoms, such as drooping leaves, yellowing, curling, stunted growth, and shortened internodes; they become dry and eventually die (Shaheen et al., 2021). This pathogen commonly infects solanaceous crops, especially chili, with 17-22% disease incidence, decreasing yield by 90.5 to 115.5 thousand tons in Pakistan (Bashir et al., 2018; Serrano-Jamaica et al., 2021).
To combat plant diseases, chemical pesticides and fungicides offer suitable control in the field, but the heavy use of these chemicals has been reported to cause environmental pollution and may lead to human health issues, such as cancer (Chen and Ying, 2015). The growing interest in this emerging field can be attributed to a widespread desire to decrease dependence on agrochemicals, owing to their adverse impacts on human health and the environment (Iqbal et al., 2023a). Therefore, biological control could be a useful and effective approach to manage wilt disease in both the greenhouse and the field and would also promote the production of chili crops. Biological control is an alternative method that promotes sustainable and environmentally friendly agricultural practices (Iqbal et al., 2023b). Trichoderma, Paecilomyces, Bacillus, and Pseudomonas are the four most widely accepted genera, comprising more than 200 species, which have exhibited remarkable abilities to control a large number of plant diseases while promoting plant growth (Peix et al., 2009; Rivas et al., 2009; Cai et al., 2022; Xu et al., 2023). Of these, Bacillus is one of the most promising biocontrol genera in agriculture, controlling soil-borne diseases in many crops (Yuan et al., 2012; Sun et al., 2023), and is also confirmed as a plant growth-promoting bacterium (PGPB). B. subtilis has the ability to reduce disease incidence and increase plant growth and survival through two mechanisms, direct and indirect (Khan et al., 2018). After successfully suppressing the mycelial growth of two soil-borne pathogens in tomato, B. subtilis PTS-394 was evaluated against Fusarium solani, the root rot pathogen of chili, with excellent results (Qiao et al., 2023). B. subtilis showed broad-spectrum activity against F. oxysporum, reducing disease incidence and increasing plant growth parameters in chili by producing various antibiotics (Yu et al., 2011). These findings suggest that B. subtilis could provide excellent control against chili wilt disease caused by Fusarium oxysporum f. sp. capsici.
Chili plants were severely affected by wilt disease in the Hyderabad region of Sindh province. Infected plants were collected to isolate and identify the causal pathogen responsible for this disease. Through molecular characterization, we confirmed that the causal pathogen is Fusarium oxysporum f. sp. capsici. We evaluated Bacillus subtilis strains against this pathogen under laboratory and greenhouse conditions. The greenhouse experiment was conducted to assess the growth promotion and biological control activity of the strains on chili plants.
Sample collection and isolation of the pathogen
During the field survey, chili plants in the Hyderabad region of Sindh province, Pakistan, were severely affected by wilt disease, showing symptoms ranging from partial to complete plant death. The infected plants showed stunted growth, few fruits, and yellowing to brownish discoloration of the infected leaves. The diseased samples were collected in paper envelopes and brought to the Disease Diagnostic Laboratory at the Department of Plant Protection, Sindh Agriculture University, Tando Jam, Hyderabad, Pakistan. The roots of the infected plants were washed with tap water three times. After that, the infected roots were surface sterilized with 70% ethanol and blotted dry on filter paper. The roots were cut into small pieces and placed on Petri plates containing potato dextrose agar (PDA). In total, 1 ml of streptomycin and penicillin antibiotics was mixed into the PDA medium to prevent bacterial contamination. Five pieces were placed in each Petri plate; the plates were sealed with parafilm tape and incubated at 27 ± 2 °C for 3 days.
Purification and morphological characteristics of pathogen
Fluffy white, creamy white, or yellowish-creamy colonies were picked carefully and placed on PDA plates containing the same concentration of antibiotics mentioned above. After full growth of the pathogen, single hyphal tips were taken carefully and placed again on PDA plates for purification. The taxonomy of the pathogen was studied carefully using previously reported literature (Booth, 1971). Pictures of the microstructures of the mycelium, chlamydospores, microconidia, and macroconidia were taken under a microscope (Carl Zeiss Microimaging GmbH, 37081 Göttingen, Germany) to preliminarily clarify their morphological and structural characteristics.
Collection of rhizosphere soil for bacterial strains
To isolate bacterial strains, rhizosphere soil was collected from five healthy chili plants. The plants were selected randomly, the topsoil was removed, and then 15 soil samples were taken from a depth of 5-7 inches with the help of a soil auger and placed in plastic bags. The samples were brought to the Disease Diagnostic Laboratory of the Department of Plant Protection, Sindh Agriculture University, Tandojam, Hyderabad, Pakistan, for the isolation of bacterial strains.
Isolation and purification of bacterial strains from soil
A 5 g subsample was taken from each soil sample and soaked for half an hour. After that, the subsample was added to a 100 ml conical flask containing 45 ml of distilled sterilized water and 0.45 g NaCl. The sample was shaken thoroughly in a water bath at 37 °C and 200 rpm for 30 min. Once completely mixed, the samples were divided into five 2-ml sterilized tubes and heat-shocked in a hot water bath at 80 °C for 10 min. In total, 0.1 ml of suspension from each tube was taken and spread on 90 mm Petri plates containing nutrient agar (NA) medium. The plates were sealed with parafilm tape and incubated in an inverted position at 37 ± 2 °C for 8-12 h. Milky white, creamy white, and yellowish-white streaky colonies of different sizes and shapes were visually examined on the plates. Colonies were picked carefully and re-streaked on NA medium for purification. All strains were preserved at −20 °C for further study. To identify the Bacillus strains, their morphology was evaluated based on colony type, bacterial shape, size, and growth characteristics on NA medium (Bergey, 1994).
Screening of bacterial strains
A total of 33 bacterial strains were screened by the dual-culture assay method on PDA medium for their antagonistic activity against Fusarium oxysporum f. sp. capsici FOC-1 (Gupta et al., 2001). Filter paper was cut into 5 mm pieces, and three pieces were placed on each PDA plate at 90° angles. Two strains were tested per plate. In total, 0.3 μl of actively grown bacterial suspension was spotted onto two of the filter papers, and 0.3 μl of distilled sterilized water (ddH2O) was applied to the third as a control (Utkhede and Sholberg, 1986). The plates were sealed with parafilm tape and incubated in an inverted position at 28 ± 2 °C for 2 days. The plates were examined daily, and the growth inhibition zone (GIZ) of the fungus was measured in diameter.
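Where Table 1 codes the GIZ in millimeters and Figure 2C reports percentages, the conversion from radial growth to percent inhibition is the standard dual-culture formula. The sketch below is a hypothetical illustration with made-up radii, not the authors' measurements.

```python
# Hypothetical sketch: percent inhibition of radial mycelial growth in a
# dual-culture assay; the radii below are illustrative, not measured values.

def growth_inhibition_pct(control_mm: float, treated_mm: float) -> float:
    """Percent inhibition = (C - T) / C * 100, where C and T are the radial
    growth of the pathogen on control and dual-culture plates (mm)."""
    return (control_mm - treated_mm) / control_mm * 100.0

print(growth_inhibition_pct(45.0, 30.5))  # ~32.2%, the order reported for AM21
```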
Antagonistic activity
Among the 33 bacterial strains, 11 were selected based on the screening, and their antagonistic activity against FOC-1 was evaluated using the dual-culture method described above. All bacterial strains were grown on NA medium as described above and were then tested against the pathogen on PDA medium. The growth inhibition zone (GIZ) was recorded as described above. Of these 11 strains, the 2 strains that showed the greatest activity against the pathogen were selected for the greenhouse study of chili plant growth promotion.
Pathogenicity assay
Seeds of the Ghotki chili variety were germinated in an incubator at room temperature (30 ± 2 °C) and transferred to 8 cm thermopole pots. Next, 30 ml of water was added to the purified pathogen plates and the mycelia were scraped with a sterile applicator to prepare a fungal suspension. A hemocytometer was used to adjust the pathogen suspension to a concentration of 1 × 10^5 spores (colony-forming units) per ml. The 1-month-old chili plants were inoculated with the 1 × 10^5 cfu/ml spore suspension and observed continuously. After 7-10 days of inoculation, disease symptoms were observed on the chili plants. The infected plants were taken to the laboratory and photographed, and the causal pathogen was successfully re-isolated from the infected plant roots using the method described above. Finally, pathogenicity was confirmed based on Koch's postulates.
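The adjustment to 1 × 10^5 spores/ml is routine counting-chamber arithmetic. The sketch below uses an illustrative spore count and target volume (neither is from the paper) and assumes the standard 0.1 µl volume of a hemocytometer large square.

```python
# Hypothetical hemocytometer arithmetic; counts and volumes are illustrative.

def hemocytometer_conc(mean_count_per_square: float, dilution: float = 1.0) -> float:
    # One large square holds 0.1 uL, so counts scale by 1e4 to spores per ml.
    return mean_count_per_square * dilution * 1e4

stock = hemocytometer_conc(mean_count_per_square=32)      # 3.2e5 spores/ml
target, final_volume_ml = 1e5, 50.0
stock_needed_ml = target * final_volume_ml / stock        # C1*V1 = C2*V2
print(f"take {stock_needed_ml:.1f} ml of stock, dilute to {final_volume_ml} ml")
```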
DNA extraction
For DNA extraction, the CTAB method developed by Doyle and Doyle (1987) was used, with slight modifications, on the biocontrol agent cultures and the other microorganisms. A NanoDrop spectrophotometer was used to check DNA concentration and purity following the method of Li et al. (2006). The DNA concentration and purity were further verified by running the samples on a 1% agarose gel for 30 min.
PCR-based detection
For PCR-based detection, the primer pair ITS1/ITS4 was used to amplify the pathogen DNA, and 16S rRNA primers were used for the bacteria (White et al., 1990). Each PCR reaction contained 1.5 μl of each primer, 7 μl of master mix, and 0.5 μl of Platinum Taq polymerase in a total volume of 12.5 μl. An automated thermal cycler was used for the amplification, with a protocol consisting of an initial denaturation at 96 °C for 9 min, followed by 40 cycles of denaturation at 96 °C for 30 s and annealing at 53 °C for 1 min. The final extension was carried out at 72 °C for 7 min. The amplified products were detected on a 1.5% agarose gel containing ethidium bromide (Li et al., 2006).
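For reference, the cycling program above can be restated as data and its programmed hold time totaled. This is only a restatement of the published parameters in code, not part of the authors' workflow.

```python
# Cycling program from the methods: (step, temp_C, seconds per cycle, cycles)
profile = [
    ("initial denaturation", 96, 9 * 60, 1),
    ("denaturation",         96, 30,     40),
    ("annealing",            53, 60,     40),
    ("final extension",      72, 7 * 60, 1),
]

total_s = sum(seconds * cycles for _, _, seconds, cycles in profile)
print(f"programmed hold time ~ {total_s / 60:.0f} min, excluding ramping")  # ~76 min
```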
Characterization of the strains
Positive PCR products were sequenced following the manufacturer's recommendations (Bio Product). The sequences were analyzed with BioEdit version 7.2 software (Hall, 1999) and compared with sequences retrieved using the NCBI BLAST tool. The sequences were then loaded into MEGA-7 software and aligned with the ClustalW program (Kumar et al., 2016). A phylogenetic tree was constructed using the neighbor-joining method with 1,000 bootstrap replicates and the Tamura 3-parameter model (Kong et al., 2000).
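A minimal open-source sketch of the tree-building step is shown below, using Biopython instead of MEGA-7. The input filename is hypothetical, simple identity distances stand in for the Tamura 3-parameter model, and bootstrapping is omitted.

```python
# Sketch only: neighbor-joining tree from an aligned FASTA file with Biopython.
# "its_aligned.fasta" is a placeholder for the ClustalW-aligned ITS/16S sequences.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("its_aligned.fasta", "fasta")
dm = DistanceCalculator("identity").get_distance(alignment)  # pairwise distances
tree = DistanceTreeConstructor().nj(dm)                      # neighbor joining
Phylo.draw_ascii(tree)
```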
Growth promotion and biological activity
For this experiment, two bacterial strains were selected on the basis of their high antagonistic activity against Fusarium oxysporum f. sp. capsici FOC-1. Seeds of the Ghotki chili variety were obtained from a nearby shop and brought to the laboratory. The seeds were washed three times with distilled water and surface sterilized with 70% ethanol. Next, 20 seeds were dipped in the bacterial suspension for 30 min for the growth promotion and biological assays and dried on sterilized blotting paper. Similarly, the control seeds were dipped in sterilized distilled water (ddH2O) for 30 min. The seeds were placed on sterilized filter paper in 90 mm Petri plates and incubated at 30 ± 2 °C for 3 days. After germination in the incubator, five seedlings were transferred to each 80 cm soil pot containing sterilized soil mixed with peat moss at a ratio of 3:1. The plants were examined regularly, and 50 ml of ddH2O was added daily. In the biological assay, 1 × 10^5 cfu/ml of pathogen suspension was applied to the plant roots after 15 days. After 1 month, plant parameters were recorded, including root length, shoot length, fresh weight, and dry weight.
Data analysis
Statistical parameters such as the mean, standard deviation, analysis of variance, and LSD multiple comparison tests were calculated using the Statistix 8.1 package. GraphPad Prism version 8 was used to create the graphs, which were edited and merged with Adobe Illustrator CC 2019.
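For readers without Statistix, the ANOVA-plus-LSD workflow can be reproduced as sketched below. The three groups contain made-up shoot-length values standing in for the measured data, and Fisher's LSD is computed at α = 0.05 assuming equal group sizes.

```python
# Illustrative one-way ANOVA + Fisher's LSD; the values are placeholders.
import numpy as np
from scipy import stats

control = np.array([15.2, 16.1, 16.2])
am13    = np.array([17.0, 17.3, 17.1])
am21    = np.array([24.5, 24.9, 24.8])
groups  = [control, am13, am21]

f, p = stats.f_oneway(*groups)

n   = len(control)                                    # equal group sizes assumed
dfe = sum(len(g) - 1 for g in groups)                 # error degrees of freedom
mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / dfe
lsd = stats.t.ppf(0.975, dfe) * np.sqrt(2 * mse / n)  # Fisher's LSD, alpha = 0.05
print(f"F = {f:.1f}, p = {p:.2g}; two means differ if their gap exceeds {lsd:.2f}")
```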
Isolation and purification of pathogen
During the field investigation, chili plants in the Hyderabad region of Sindh province, Pakistan, were severely infected by wilt disease and showed symptoms ranging from partial to complete plant death. The infected chili plants showed yellowish and brownish discoloration of the leaves compared with healthy plants. Infected plants also showed stunted growth and a small number of fruits. On PDA medium, the infected roots yielded a number of creamy to whitish-creamy colonies. These isolates showed morphological characteristics similar to Fusarium oxysporum f. sp. capsici and were given the isolate name FOC-1.
Microscopic and morphological study
After a 5-day incubation period, fluffy whitish to yellowish-creamy colonies developed on the PDA plates, with septate, hyaline, frequently branched mycelium. Conidia are the asexual spores produced by the fungus. The fungus produced microconidia and macroconidia of different sizes and shapes on the PDA plates, all of them colorless. The conidiophores displayed a range of sizes and shapes, including both simple, stout structures and slender ones (Figures 1A-C). Microconidia of the isolates were typically single-celled and slightly curved, measuring 5-12 × 2.3-3.5 μm, whereas the macroconidia were long, with 3-5 septa, bent, and slightly curved with pointed ends, measuring 27-46 × 3-4.5 μm.
FIGURE 1
Microscopic and macroscopic structures of the isolated fungus Fusarium oxysporum f. sp. capsici FOC-1 from Hyderabad, Sindh, Pakistan, on PDA plates. (A) Mycelial growth of FOC-1 on PDA medium, (B) conidiophores, and (C) microconidia and macroconidia of the pathogen under the microscope. A 3.0 USB camera microscope was used for the chlamydospore, microconidia, and macroconidia pictures. The 1-month-old chili plants infected by FOC-1 showed various symptoms, with partial and complete mortality compared with un-inoculated control plants; inoculated plants showed reduced plant height and weight, stunted growth, and yellowing to brownish wilting symptoms on the leaves (Figures 3A,B). To confirm FOC-1 infection, the isolate was successfully recovered on PDA medium at high frequency (Figure 3C). In addition, the greatest plant height (15.270 cm) was recorded in the control, followed by the FOC-1-inoculated plants (9.5 cm).
Molecular characterization
The internal transcribed spacer (ITS) amplification products for FOC-1 showed that all fragments were 488 bp in length (Supplementary Figure S1), and the 16S rRNA amplified products of AM13 and AM21 were 1,500 bp in length. The PCR products were sequenced and compared with NCBI BLAST sequences, showing 99.9% sequence similarity to the GenBank Fusarium oxysporum f. sp. capsici (OM033476) and Bacillus subtilis (FJ788428 and FJ788426) sequences, as shown in the phylogenetic trees in Figures 4A,B. The molecular results confirmed that the FOC-1 isolate was very similar to F. oxysporum f. sp. capsici, the causal agent of chili wilt disease, and that AM13 and AM21 were most similar to B. subtilis. The sequences were submitted to NCBI GenBank under accession numbers OQ825980, OR775665, and OR775666.
Plant growth promotion under greenhouse
The two Bacillus strains that had shown the highest antagonistic activity against FOC-1 in vitro were used to assess growth promotion and biological control activity in chili plants. Both strains proved highly effective and enhanced plant parameters compared with the control. The maximum root length (16 cm), shoot length (24.74 cm), fresh weight (4.51 g), and dry weight (0.662 g) were recorded in plants treated with AM21, followed by AM13 with a root length of 14.93 cm, shoot length of 17.12 cm, fresh weight of 3.84 g, and dry weight of 0.46 g. The minimum growth parameters, root length (11.7 cm), shoot length (15.83 cm), fresh weight (1.93 g), and dry weight (0.196 g), were recorded in the control plants (Figure 5).
Biological activity
Bacterial suspension-coated seeds of the Ghotki chili variety were germinated in the incubator and transferred to 8 cm thermopole pots in the greenhouse. The FOC-1 pathogen suspension was applied to the roots after 15 days. In comparison with the control, both bacterial strains, AM13 and AM21, enhanced plant growth and showed excellent biological control against FOC-1 (Table 2).
Discussion
Fungal pathogens represent a substantial menace to agriculture, crop yield, and global food production (Singh et al., 2023). The genus Fusarium is known to cause wilt disease in over 100 plant species and is ranked the fifth deadliest plant pathogen (Dongzhen et al., 2020; Rampersad, 2020; Medeiros-Araujo et al., 2021; Girma, 2022). Fusarium wilt of chili ranks as the third most devastating disease affecting chili crops. Currently, agrochemical products are the predominant methods employed for disease control. Nevertheless, the excessive application of these chemicals not only poses adverse effects on the environment and human health but also targets beneficial life forms in the field (Ramesh et al., 2009; Tudi et al., 2021). Biological control methods have emerged as effective and environmentally friendly alternatives that have garnered significant attention and are rapidly being adopted to replace chemical control measures (Narasimhan and Shivakumar, 2015; Baker et al., 2020). Biocontrol agents offer the advantage of easy transfer to the field and have the potential to augment host resistance, immunity, plant growth, yield, and biomass production. Among these agents, Bacillus species stand out as widely utilized against various pathogens, renowned for their ability to enhance plant growth, induce resistance through the production of antimicrobial compounds, and generate secondary metabolites (Singh et al., 2017; Miljaković et al., 2020; Bamisile et al., 2021). In the current investigation, a survey was conducted in chili fields in the Hyderabad region of Pakistan to identify the causative agent of the disease. Using isolation techniques, the FOC-1 isolate was successfully obtained from infected plant roots. Employing Koch's postulates, FOC-1 was confirmed as the pathogenic fungus responsible for chili wilt disease in this region. Subsequent morphological and molecular characterization validated the pathogen as F. oxysporum f. sp. capsici, which causes wilt disease in Solanaceae crops, as described in the literature (Menge et al., 2020; El-Kazzaz et al., 2022). The current findings unveiled the intricate interplay between Bacillus strains and the pathogenic fungus in the context of Fusarium wilt management and chili plant growth promotion. Screening of 33 bacterial strains for antagonistic activity against FOC-1 on PDA medium identified 11 strains with significant inhibition of mycelial growth. Bacillus strains AM13 and AM21 exhibited the highest antagonistic activity, with growth inhibition of 28.61 and 32.21%, respectively, while the other strains showed varying degrees of effectiveness. In pathogenicity tests, FOC-1 inoculation of 1-month-old chili plants resulted in symptoms including reduced plant height and weight, stunted growth, and wilting of the leaves; the highest plant height (15.270 cm) was recorded in the control plants. Previous reports of Bacillus strains inhibiting the mycelial growth of Fusarium oxysporum, and by 51.02% that of Alternaria alternata, align with our findings of Bacillus strains effectively inhibiting the mycelial growth of FOC-1 (Chandra et al., 2020; Kumar et al., 2020).
An in vitro antagonism trial showed that approximately 88% of endophytic isolates reduced the mycelial growth of F. oxysporum (41%) compared with R. solani (24%) and P. aphanidermatum (30%). The application of bacterial endophytes reduced disease incidence by 70% and improved the fresh biomass of roots (2.33-fold) and shoots (3.80-fold) compared with pathogen control plants. These numerical values support the effectiveness of Bacillus strains in enhancing plant growth parameters and combating F. oxysporum infection, consistent with our current findings (Gupta et al., 2022). A previous study of Bacillus subtilis CAS15 also strongly supports the current findings. The B. subtilis CAS15 strain exhibited a strong ability to inhibit the mycelial growth of 15 plant fungal pathogens, at rates ranging from 19.26 to 94.07%. Additionally, CAS15 significantly reduced the incidence of Fusarium wilt in pepper plants by 12.5-56.9%, indicating its potential to induce systemic resistance. Moreover, treated plants showed notable increases in height at various stages, being 27.24 to 54.53% taller than controls. Furthermore, CAS15 enhanced pepper yield by shortening the time to 50% flowering to 17.26 days, increasing average fruit weight by 36.92%, and boosting average yield per plant by 49.68% (Yu et al., 2011). Another study screened 59 PGPR against Colletotrichum; greenhouse experiments demonstrated substantial disease protection, with a remarkable 71% reduction in anthracnose disease incidence observed in plants pretreated with B. amyloliquefaciens, followed by B. cepacia and P. rettgeri. This induced resistance was supported by higher activity levels of defense enzymes (phenylalanine ammonia lyase, peroxidase, polyphenol oxidase, and β-1,3-glucanase) (Gowtham et al., 2018). B. subtilis has been reported to produce compounds such as indole acetic acid, siderophores, amylase, extracellular protease, cellulase, and β-1,3-glucanase, enhancing the activities of the defense-related enzymes PPO, SOD, CAT, PAL, and LOX and promoting growth in various crops (Shasmita Swain et al., 2022; Xu et al., 2022; Yang et al., 2023). In other previously studied cases, similar findings were reported, where a single bacterial strain, Bacillus subtilis APK, exhibited significant antifungal potential against the anthracnose pathogen. This Bacillus strain decreased pathogen mycelial growth in vitro and enhanced chili seedling growth under greenhouse conditions (Kumar et al., 2021). Furthermore, previous research has indicated that several Bacillus species can notably improve the growth and development of chili (Peña-Yam et al., 2016). These consistent results underscore the efficacy of Bacillus strains in both combating fungal pathogens and promoting the growth of chili plants.
Conclusion
Fusarium oxysporum f. sp. capsici is one of the most destructive and devastating pathogens of the chili crop. The present study evaluated two Bacillus strains with antagonistic activity against F. oxysporum that ultimately promoted growth in chili plants. It is recommended that these strains be used as part of an integrated management system to provide effective control of this disease. It is also suggested that these strains be tested against other plant pathogens, as they may provide a safer management option compared with chemical pesticides.
FIGURE 2
FIGURE 2 Antagonistic activity of two different Bacillus strains against Fusarium oxysporum FOC-1 under laboratory conditions. (A) Front view of the dual-assay plate, (B) reverse view of the plate, and (C) growth inhibition percentages of 11 bacterial strains against F. oxysporum. The error bars and different letters represent the least significant difference at p = 0.05.
FIGURE 3
FIGURE 3 Pathogenicity assay on 1-month-old chili seedlings to assess the effect of FOC-1. (A,B) The un-inoculated control plant appeared healthy, whereas the inoculated plant showed stunted growth and wilting symptoms; (C) the FOC-1 isolate was completely recovered from the infected roots of inoculated plants. (D,E) Plant height and weight of the control and inoculated plants. Letters and bars differ significantly (p < 0.05).
FIGURE 4
FIGURE 4 Phylogenetic analysis of isolates from infected chili plants. (A) Fusarium oxysporum f. sp. capsici FOC-1 tree; (B) Bacillus subtilis AM13 and AM21 tree. The maximum likelihood program in MEGA 11 software was used for the phylogenetic trees with partial 488 bp and 1,542 bp sequences. The black dots represent the F. oxysporum FOC-1 ITS and B. subtilis AM13 and AM21 16S sequences.
FIGURE 5
FIGURE 5 Growth promotion activity of two Bacillus strains in chili plants. Letters and bars show standard deviations and significant differences (p < 0.05).
TABLE 1
In vitro screening of 33 bacterial strains against Fusarium oxysporum f. sp. capsici FOC-1 using the plate culture method on PDA medium. The width of the growth inhibition zone (GIZ) is coded as follows: --- represents 0 mm growth inhibition, + represents 1-10 mm, ++ represents 11-20 mm, +++ represents 21-30 mm, and ++++ represents 31-40 mm growth inhibition.
TABLE 2
Biological activity of AM-13 and AM-21 in 1-month-old chili plants against Fusarium oxysporum f. sp. capsici FOC-1. Letters indicate values that differ significantly (p < 0.05). a*, highly effective; b*, effective; c*, moderately effective; d*, least effective. | 2024-05-29T15:17:15.568Z | 2024-05-27T00:00:00.000 | {
"year": 2024,
"sha1": "b8839ec26eb8f3f2abdd11ab08cc0d1e7f5d96ee",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/journals/microbiology/articles/10.3389/fmicb.2024.1388439/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ad3072d6ff47bf03cd6396a3c19c03e66e258c18",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248060536 | pes2o/s2orc | v3-fos-license | Thermoset/Thermoplastic Interphases: The Role of Initiator Concentration in Polymer Interdiffusion
In the co-bonding of thermoset and thermoplastic polymers, the interdiffusion of the polymers results in the formation of an interphase between them. Understanding the factors influencing the interdiffusion and the resulting interphase is crucial in order to optimize the mechanical performance of the bond. Herein, for the first time, the effect of the initiator concentration of the thermoset resin-initiator mixture on the interphase thickness of co-bonded thermoset-thermoplastic polymers is investigated. The dependence of the gelation time on the initiator concentration is determined by rheometer measurements. Differential scanning calorimetry measurements are carried out to determine the speed of cure. To co-bond the polymers, pieces of already-manufactured thermoplastic plates are embedded in a resin-initiator mixture. The interphase thickness of the co-bonded polymers is measured with an optical microscope. The results of this study show that the gelation time decreases as the initiator concentration increases. This decrease leads to a significant reduction in both interphase thickness and diffusivity. For instance, increasing the initiator/resin weight ratio from 1% to 3% reduces the gelation time by 74% and the interphase thickness by 63%.
Introduction
Fiber-reinforced polymer composites (FRPC) provide not only a high strength-toweight ratio but also exceptional properties such as high durability, stiffness, and corrosion resistance. While knowledge of the manufacturing of individual FRPC parts has reached a level of maturity, the integration and assembly of different FRPC parts is far less developed, particularly considering the co-bonding process. Co-bonding is a bonding technique in which a prefabricated part (in this case, a thermoplastic (TP) polymer) is bonded with a (neat or fiber-reinforced) thermoset polymer through a curing reaction of the thermoset resin [1][2][3][4]. The areas of application of this technique involve the bond between the pultruded profiles at the blade root, spar cap, and leading edge protection (LEP) layer and the over-infused main body of the wind turbine blade [5,6]. Although co-bonding may refer to the bonding of two parts with or without an adhesive between them [1][2][3][4]7], in this work we will focus on co-bonding without adhesives, where bonding takes place by the interdiffusion of polymers that are in contact as the curing takes place. The interdiffusion of the bonded polymers, and, subsequently, the curing of the resin result in the formation of an interphase [2,4,[8][9][10][11][12]. The size and morphology of the interphase have been shown to depend on the gelation time and viscosity of the resin, the thermodynamic affinity between the polymers, and the physical state of the thermoplastic [2,4,[8][9][10]. For instance, high levels of thermodynamic affinity may promote homogeneous mixing, whereas phase separation may take place at lower levels of affinity [9]. The gelation time and viscosity of the resin have competing effects on the interphase thickness [2]. Higher gelation times allow more time for the interdiffusion to take place, which eventually promotes an increase in the interphase thickness. Conversely, an increase in the resin viscosity hampers the diffusion of resin into the thermoplastic, leading to a lower interphase thickness.
It is of the utmost importance to investigate the influence of the aforementioned parameters on the TP-TS interphases. This is necessary in order to optimize the design of co-bonded hybrid composites, closing the knowledge gap in this field and promoting their more widespread use. To illustrate, for the accurate prediction of the residual stresses and the resulting process-induced deformations in the co-bonded composites, the size and the mechanical properties of the interphase are essential inputs [1,13]. Another reason why understanding the interphase is important is that the interphase morphology, which is influenced by processing conditions, plays a major role in the resulting bond strength [14]. As it is highly desirable to have stronger bonds in the joining of composites, determining the factors leading to optimum bond performance is crucial.
One way to control the cure speed of a resin-initiator mixture is to change the concentration of initiator in the mixture [15,16]. For instance, in vacuum-assisted resin transfer molding, the initiator concentration can be tuned to delay the gelation of the resin and allow sufficient time for the resin to fill the mold [15]. In the literature, studies on the effect of initiator concentration on the gelation time of neat resins are available [15][16][17][18]. Nevertheless, the effect of initiator concentration on the interphase morphology, which is crucial for co-bonded parts given that the gelation time also shapes the interphase, has not been studied so far.
One benefit of changing the initiator concentration to tune the gelation time is that it allows one to control the gelation time without affecting the other parameters controlling the interdiffusion. For instance, decreasing the temperature to increase the gelation time also increases the viscosity of the resin, which has an adverse effect on the interphase thickness [2,19]. Tuning the initiator concentration eliminates this problem and, hence, enables one to investigate the effect of gelation time on the interdiffusion exclusively.
This study aims to investigate the effect of initiator concentration of a resin-initiator mixture on the interphase thickness of co-bonded unsaturated polyester resin (UPR) and polycarbonate (PC). UPR is a resin that is commonly used in wind turbine blades and PC is a TP polymer that could potentially be used for LEP applications. Initially, the gelation time of the resin at different initiator concentrations is measured and DSC tests are conducted to determine the cure behavior (cure speed and degree of cure). Resin-initiator mixtures with different initiator concentrations are co-bonded to PC and the interphase thickness is measured using optical microscopy. To investigate the diffusion kinetics, the diffusivity of the mixtures is calculated based on Fick's second law of diffusion using the measured interphase thicknesses and gelation times. Finally, the measured interphase thickness and gelation time are correlated.
Materials
The curing process of unsaturated polyester resin (UPR) is free radical chain-growth crosslinking polymerization. In this reaction, styrene is used as a crosslinking agent to link the polyester molecules [20]. The curing starts with the opening of highly reactive initiator (peroxide) molecules, which leads to free radical formation. These radicals interact with the styrene molecules to form new radicals. Eventually, the new radicals make contact with the polyester chains and open their unsaturated C=C bonds. This leads polyester chains to be linked via styrene bridges; hence, crosslinking takes place [20].
The UPR used in this study was 40-45% styrene by weight; it is used in the industry for manufacturing large parts through vacuum-assisted resin transfer molding. Methyl ethyl ketone peroxide (MEKP) was used as an initiator. UPR and MEKP were mixed in certain ratios for 3 min to obtain the UPR-MEKP mixture to be cured. In this study, the initiator concentration was varied by using different initiator/resin weight ratios ranging from 0.5% (0.5 g initiator/100 g of UPR) up to 3%.
As the TP material used for the co-bonded samples, a LEXAN™ PC plate with a thickness of 2 mm was used. This material was chosen since it was shown to have a strong thermodynamic affinity to UPR [2].
Cure Kinetics (DSC) Measurements
Isothermal differential scanning calorimetry (DSC) measurements were conducted at 25 °C using a Mettler Toledo DSC to characterize the cure kinetics of the mixtures with different initiator concentrations. Mixtures had weights of about 20 mg and the DSC scans lasted for 24 h. A total of 3 specimens were tested at the initiator/resin weight ratio of 1.5%, whereas either 1 or 2 specimens were tested at the other ratios (0.5%, 1%, 2%, 2.5%, 3%). From the DSC scans, the times at which the DSC peak started and reached its maximum were obtained to investigate the speed of curing. In addition, to gain more insight into the cure behavior, the heat and degree of cure were calculated.
Gelation Time Measurements
Gelation time measurements of UPR-MEKP mixtures with different initiator/resin weight ratios were carried out utilizing an Anton Paar-Physica MCR 501 rheometer in "plate-plate" oscillatory mode. Plates had a diameter of 25 mm with a spacing of 0.5 mm between them. A strain of 1% and a frequency of 1 Hz were used. Tests were carried out isothermally at 23 °C. Storage and loss moduli were recorded, from which the gelation time was obtained as the time when the storage modulus equaled (and subsequently exceeded) the loss modulus. A total of 3 specimens were tested at initiator/resin weight ratios of 1.5%, 2%, 2.2%, and 3%; 2 specimens were tested at ratios of 1% and 2.5%.
Co-Bonding
The co-bonding of TP and TS polymers was carried out by embedding pieces of TP plates in the TS resin in cylindrical cups 25 mm in diameter. Initially, pieces of PC plates with dimensions of 18 mm × 18 mm were cut making use of a paper cutter. Later, each of the TP plate pieces was placed in the middle of the cylindrical cups with the help of metallic holders, as shown in Figure 1. Finally, the resin-initiator (UPR-MEKP) mixtures with various initiator/resin weight ratios were poured on the TP pieces up to a height of about 20 mm. The TP pieces embedded in resin were left for curing at room temperature for 24 h. At least three specimens were prepared per initiator/resin weight ratio (at 1%, 1.5%, 2%, 2.5%, 3%), except at 0.5%, where two specimens were prepared.
Interphase Thickness Measurements
The interphase thickness was measured using a Keyence VHX-7000 digital optical microscope equipped with a VH-100UR lens. Before microscopy, the co-bonded samples were polished using a Struers Tegramin 30 polisher. The polishing procedure involved grinding with SiC paper (500, 1000, 2000, and 4000 grits, respectively) and subsequent polishing using several different polishing cloths with diamond solutions. Final polishing was performed using an MD-Chem cloth with a colloidal silica suspension. During the measurements, the interphase thickness was obtained from the middle point of the cross-section, as marked in Figure 1, where the interphase thickness reached a maximum (as it was away from the metallic clamps that prevented interdiffusion).
Diffusivity of UPR into Thermoplastic Polymers
The diffusivity of the UPR-MEKP mixture into PC was estimated to evaluate the diffusion kinetics at different initiator/resin weight ratios. It was shown by Zanjani et al. [2] that the diffusion of UPR into PC is Fickian. Fick's second law of diffusion, which was used to model the diffusion kinetics, is as follows:
$$\frac{\partial C}{\partial t} = D\,\frac{\partial^2 C}{\partial x^2} \tag{1}$$
where C is the concentration of the diffusing species and D is the diffusivity (diffusion coefficient) [21]. The diffusivity D is a proportionality factor between the mass flux and the concentration gradient [21]. In other words, the larger D is, the larger the mass flux is for a given concentration gradient. Assuming that D is constant, C at a certain time and location can be solved for when D is known and the boundary conditions are given [4,21]. In our case, the interphase thickness (diffusion length) and the gelation time are known, while D is unknown. Hence, assuming that the interphase development stops at the gelation time [22] and using the gelation time and interphase thickness measurements, the diffusivity D can be calculated, as presented in detail in [4].
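The exact boundary-value solution is given in [4]; as an order-of-magnitude illustration only, the sketch below assumes the common scaling x ≈ 2√(Dt) for the diffusion length at gelation, with measured pairs taken from Figures 3b and 5a.

```python
# Back-of-envelope sketch: [4] solves Fick's second law with the proper
# boundary conditions, whereas here x ~ 2*sqrt(D*t) is assumed, so the
# numbers below only indicate the order of magnitude of D.

def diffusivity_m2_per_s(interphase_m: float, t_gel_s: float) -> float:
    return (interphase_m / 2.0) ** 2 / t_gel_s

# (initiator/resin ratio %, interphase um, gelation time h) from Figs. 3b and 5a
for ratio, x_um, t_h in [(1.0, 598.0, 4.30), (1.5, 378.0, 2.27)]:
    d = diffusivity_m2_per_s(x_um * 1e-6, t_h * 3600.0)
    print(f"{ratio}%: D ~ {d:.1e} m^2/s")  # decreases with initiator ratio
```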
Cure Kinetics and Gelation Time
DSC curves of UPR-MEKP mixtures with different initiator/resin weight ratios are shown in Figure 2. All curves exhibit exothermic peaks, which correspond to the heat flow resulting from the curing reaction. According to the figure, the peaks start and reach their highest point earlier as the initiator/resin weight ratio is increased, which means that curing takes place faster. The heat of cure is calculated as the area below the DSC peaks. For this, initially, a baseline is drawn for each DSC curve. After constructing the baselines and calculating the area between the DSC curves and the baseline, the heat of cure is obtained; the values are presented in Table 1. The DSC tests carried out at the initiator/resin weight ratio of 1.5%, involving multiple specimens, had a very low scatter, showing the good repeatability of the test. Note that for the mixture with 0.5% initiator, curing was not completed (see the incomplete peak corresponding to 0.5% in Figure 2); hence, the heat of cure was not calculated for this case. Considering the other cases, the heat of cure was seen to increase with an increase in the initiator/resin weight ratio, which is in agreement with the findings of Vilas et al. [17]. This means that a higher degree of cure is reached at higher initiator/resin weight ratios, where the normalized degree of cure for different initiator/resin ratios is calculated as the ratio of the heat of cure at a certain concentration to that at the concentration of 3% (Table 1). While the initiator/resin weight ratio of 1% led to a significantly low degree of cure (0.58), for higher ratios degrees of cure above 0.90 were obtained in all cases.
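The baseline-and-integrate step described above is simple to script. The sketch below is an illustrative reconstruction, not the authors' Mettler Toledo evaluation routine: it draws a straight baseline between the endpoints of the scan and integrates the exotherm above it; the arrays are placeholders for exported time/heat-flow data.

```python
import numpy as np

def heat_of_cure(t_s, heatflow_w_per_g):
    """Area between the DSC signal and a straight endpoint baseline, in J/g."""
    t = np.asarray(t_s, dtype=float)
    q = np.asarray(heatflow_w_per_g, dtype=float)
    baseline = np.interp(t, [t[0], t[-1]], [q[0], q[-1]])
    y = q - baseline
    return float(np.sum(np.diff(t) * (y[1:] + y[:-1]) / 2.0))  # trapezoid rule

# Normalized degree of cure, referenced to the 3% initiator case (Table 1):
# alpha = heat_of_cure(t, q) / heat_of_cure(t_3pct, q_3pct)
```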
Figure 2. DSC curves of UPR-MEKP mixture for different MEKP/UPR weight ratios.

The acceleration of curing was also confirmed by the gelation times measured in rheometer tests at a wide range of initiator/resin weight ratios. Figure 3a presents the evolution of storage and loss modulus with time from these measurements. It can be observed that the crossover time of the storage and loss modulus, which is a commonly used indicator of gelation time, is lower at a higher initiator/resin weight ratio. The gelation times taken from these measurements are plotted against the initiator/resin weight ratio in Figure 3b. It can be seen that the gelation time decreases with an increasing initiator/resin weight ratio, with the decrease being more dramatic at lower ratios. This is in agreement with the previous observations made by Kuppusamy and Neogi [15]. To illustrate, according to Figure 3b, the gelation time decreases by 47% (from 4.30 h to 2.27 h) in the initiator/resin weight ratio range of 1-1.5%, while the decrease is only 6% (from 1.21 h to 1.14 h) in the initiator/resin ratio range of 2.5-3%.

Figure 3. (Caption, partial) Values and error bars represent ± one standard deviation (no error bar at 1% and 2.5%, since only 2 specimens were tested at these ratios).
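The crossover criterion used for Figure 3a is easy to automate on exported rheometer data. The sketch below is a generic illustration, not the authors' processing script: it locates the first time G' meets G'' by a sign change of G' − G'' with linear interpolation.

```python
import numpy as np

def gelation_time(t_s, g_storage, g_loss):
    """First time at which G' >= G'', via linear interpolation; None if no crossover."""
    t = np.asarray(t_s, dtype=float)
    d = np.asarray(g_storage, dtype=float) - np.asarray(g_loss, dtype=float)
    if d[0] >= 0:
        return t[0]
    if not np.any(d >= 0):
        return None                      # no gelation within the scan
    i = int(np.argmax(d >= 0))           # first index where G' >= G''
    return t[i-1] + (t[i] - t[i-1]) * (-d[i-1]) / (d[i] - d[i-1])
```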
Interphase Morphology
The interphase between the co-bonded PC and UPR-MEKP mixtures with initiator/resin weight ratios of 0.5%, 1.5%, 2.5%, and 3% can be seen in the optical micrographs shown in Figure 4. In all micrographs, it can be seen that the interphase comprises two regions, one with a dark color and the other one with a lighter color and a pit-like morphology. When the dark region is focused on during microscopy analysis, a single phase is observed, signifying that the mixing is homogeneous. On the other hand, the other region exhibits a nodular, phase-separated morphology resulting from the limited solubility of PC in the UPR-MEKP mixture [2]. The interphase thicknesses measured from the optical micrographs are plotted against the initiator/resin weight ratio in Figure 5a. Figures 4 and 5a both demonstrate that the interphase thickness decreases nonlinearly with the initiator/resin weight ratio.

Interphase thickness is known to be influenced by the physical state of the TP (for instance, its temperature with respect to the glass transition temperature), the temperature-dependent viscosity, and the gelation time of the resin [2,4,10,19]. In this study, since the temperature and the TP material used were the same for experiments with different initiator/resin weight ratios, the first two factors were not effective in the observed interphase thickness vs. initiator ratio trend. Nevertheless, as also shown in Figure 3b, gelation time was strongly reduced with an increasing initiator/resin weight ratio, which is considered to be the main reason for the decrease in interphase thickness. A decrease in gelation time allows less time for interdiffusion to take place, eventually leading to a lower interphase thickness [2]. In Figure 5a, the steepest decrease in the interphase thickness (by 37%, from 598 µm to 378 µm) can be observed in the initiator/resin weight ratio range of 1-1.5%, which corresponds to the range where the steepest decrease in gelation time took place in Figure 3b. The peak times of the DSC curves shown in Figure 2, which show a similar trend to the gelation times shown in Figure 3b, also agree with the trend of interphase thickness change with the initiator/resin ratio shown in Figure 5a. Although the degree of cure is low (0.58) for the mixture with an initiator/resin weight ratio of 1% (Table 1), this is not thought to have contributed to the high interphase thickness via a lower viscosity of the mixture. This is because the interphase thickness evolution ceases at gelation [22,23], which takes place at a far lower degree of cure (0.15 [1] for the material system studied; it is assumed to be constant for different cure conditions based on [24,25]). Furthermore, at gelation, the complex viscosities of the resin-initiator mixtures with different initiator/resin weight ratios measured by the rheometer were in the range of 10-100 Pa·s for all initiator ratios, which is a small range considering the fast increase of viscosity at gelation.
In future work, in situ interphase development should be studied for resin-initiator mixtures with different initiator/resin weight ratios to verify the cessation of interdiffusion for the material studied in this work.
Using the gelation times shown in Figure 3b and the interphase thicknesses shown in Figure 5a, the diffusivity of the resin into PC was calculated based on Equation (1), which is shown in Figure 5b. It can be seen that the diffusivity also decreases with an increase in the initiator/resin weight ratio, similar to the interphase thickness. In fact, the trends of diffusivity vs. initiator/resin weight ratio ( Figure 5b) and interphase thickness vs. initiator/resin weight ratio (Figure 5a) can be seen to be quite similar. A decreasing diffusivity with an increasing initiator/resin weight ratio shows that higher ratios are less favorable for the diffusion of resin.
Conclusions
In this study, the effect of the initiator (MEKP) concentration of the resin-initiator mixture (initiator/resin weight ratio) on the interphase thickness of the co-bonded TP-TS (UPR-PC) was investigated for the first time. Gelation time was found to decrease with an increasing initiator concentration. Co-bonded polymers with higher initiator concentrations showed lower interphase thicknesses, which correlated well with the decrease seen in the gelation time. Investigating the effect of initiator concentration on the diffusion kinetics, the diffusivity of the resin was also seen to decrease with an increasing initiator concentration, showing a similar trend to that of the interphase thickness. Varying the initiator concentration helped us to exclusively investigate the effect of gelation time on the interphase thickness.
In future work, we recommend studying other TPs to check if the trend of decreasing interphase thickness with an increase in initiator concentration can be observed as well. Considering that the physical state of the thermoplastics and their affinity toward the thermoset resin both play a significant role in the interphase formation [2,8,9], investigating thermoplastics with different affinities and physical states is recommended. Furthermore, the method used for controlling the interphase thickness by varying the initiator concentration paves the way for investigating the effect of interphase thickness on the processing-induced deformations of the co-bonded TP-TS composites [3].
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. | 2022-04-10T15:08:29.697Z | 2022-04-01T00:00:00.000 | {
"year": 2022,
"sha1": "8b8db7b7d30d48e66255499bbbb57a1e512848b0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4360/14/7/1493/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "96e06b66769f19b3c15aa08dc5f99919568dd878",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
110017312 | pes2o/s2orc | v3-fos-license | A New Procedure for Nonlinear Statistical Model Extraction of GaAs FET-Integrated Circuits
: A new statistical nonlinear model of GaAs FET MMICs, which allows the representation of distance-dependent technological parameter variations by means of equivalent circuit parameters, and an automatic extraction procedure are presented. The capability of the model to reproduce statistical distributions has been successfully checked on S-parameters measured at different distances in the 1–50 GHz frequency range.
I. INTRODUCTION
MMIC design with short-length III-V technologies requires accurate statistical models to represent electrical performance variations of active devices due to process parameter dispersions. On-chip process parameter variations affect the uniformity of device parameters (such as the threshold voltage [1] and saturation current of active devices), thus producing a significant lowering of the overall circuit yield; DC-coupled gain stages, for instance, are very sensitive to bias-point variations due to threshold voltage nonuniformity. The availability of statistical model libraries, which take into account variations in the nonlinear behavior of the active device, allows the use of yield-oriented design techniques [2] such as design centering [3][4] to evaluate and optimize circuit yield. Different types of linear and nonlinear statistical models have been developed in recent years, based on a physical description of MESFET and HEMT devices, on measurement databases, or on empirical equivalent circuits.
The physical modeling approach [5][6][7] is convenient for well-established technological processes characterized by accurate physical models. For instance, in [6] a MESFET physical large-signal model has been developed and implemented in a simulator to evaluate circuit yield by means of the Monte Carlo method. Process parameters are considered as random variables with a multivariate Gaussian distribution.
Measurement-based models have also been developed and extensively used for the linear characterization of active devices. The most popular measurement-based model is the "Truth model" [8], implemented in the EEsof CAD tools. The active device is a random variable, represented by means of the S-parameter matrix, and each realization is extracted from a database of measurements during circuit yield evaluation. A large database of S-parameter matrices extracted for different FET sizes and bias points is required to obtain an accurate statistical model. Reduction of the measurement database was achieved in [9] with the application of principal component analysis (PCA), which allows the use of uncorrelated variables, and with estimation techniques [10] for the probability density function based on interpolative models such as kernel density estimation and data clustering. Moreover, the "Truth" modeling approach has been used in [11] to develop a measurement-based nonlinear FET model.
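The database-reduction idea of [9] can be sketched in a few lines: stack the measured S-parameter vectors, center them, and keep the leading principal components as uncorrelated variables. The data below are random stand-ins, not an actual measurement database, and the 95% variance cutoff is an arbitrary choice for illustration.

```python
# Illustrative PCA compaction in the spirit of [9]: each row of X would be one
# measured device (real/imag S-parameters flattened over frequency); here X is
# random stand-in data, not an actual measurement database.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))                            # 200 devices, 64 features

mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
explained = s**2 / np.sum(s**2)
k = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1  # components for 95% variance
scores = (X - mean) @ Vt[:k].T                            # uncorrelated PCA scores
X_hat = scores @ Vt[:k] + mean                            # reduced-order reconstruction
print(k, np.abs(X - X_hat).max())
```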
In empirical models [12], circuit parameters are statistically correlated: even if variations of technological parameters (such as geometrical dimensions and doping densities) of MESFET and HEMT devices are uncorrelated, several empirical parameters are affected by each of them and therefore have to be considered as correlated. Principal component analysis (PCA) has been successfully used to develop both a linear model [13] and a modeling technique [14] to extract nonlinear empirical statistical models.
In conclusion, physical nonlinear models are easier to extract, since their statistical model parameters (i.e., the device geometry, the doping density, and so on) are uncorrelated. On the other side, even if technological parameter variations are uncorrelated, each of them affects several parameters of an empirical model: these parameters therefore have to be considered as correlated, and a further effort has to be made in order to extract the covariance matrix. However, for microwave and millimeter-wave applications involving III-V sub-micron FET technologies, empirical models are preferred because of their greater accuracy. In previously proposed empirical models, no correlation is considered between parameters of different devices on the same chip: the distance-dependent correlation between devices on the same chip is neglected, thus producing a less accurate evaluation of the circuit yield. In [15], a statistical characterization of GaAs HEMT devices was presented in which the covariance matrix elements of the DC section (i.e., the drain current and the threshold voltage parameter V T0 of the Raytheon model) of the nonlinear model were considered as functions of the distance between devices.
In [16] we have dealt with the statistical correlation between on-chip process parameters as a function of the distance between devices. The covariance matrix elements of the models of two devices were considered as functions of the mutual distance between the devices. In this article, the statistical characterization proposed in [16] has been used to compose the statistical nonlinear model of a given MMIC composed of several FET devices. A procedure to extract the model parameters from the circuit topology is also presented. The active portion of the MMIC is described by a nominal nonlinear model of the FET device and a covariance matrix which contains the correlation coefficients among the empirical parameters of the different devices in the MMIC. A significant feature of the proposed model is that each element of the covariance matrix is evaluated as a function of the on-chip distance between the corresponding devices. In sections II and III, the new model and a procedure that allows automatic extraction of model parameters are presented. In section IV, a validation procedure for both the proposed model and the extraction algorithm is shown and checked by using a PML-D02AH GaAs HEMT monolithic process.
II. THE MMIC STATISTICAL MODEL
The statistical model of the MMIC comprises a nominal nonlinear model for the single device and a covariance matrix which accounts for statistical correlation among the active devices as a function of their mutual distances on the MMIC. The passive elements of the MMIC are not considered as statistical, because of their lower variations with respect to those of the active devices. The nominal model, shown in Figure 1, is composed of a set of bias-independent parameters (i.e., the extrinsic parameters, the capacitor Cdc, and the channel transit time τ), and a set of empirical nonlinear functions (Ids DC, Ids RF, Cgs, Cds, Cgd, Ri) of the instantaneous voltages Vgsi and Vdsi [17]. The statistical nonlinear model of the device is composed of both the nominal model and a set of M correlated Gaussian variables, listed in Table I. A sensitivity analysis of the functions Ids DC, Ids RF, Cgs, Cds, Cgd, Ri versus their empirical parameters has been carried out to choose the most significant M empirical parameters to be considered as random variables: in the present case M = 11 parameters have been chosen [16]. The statistical nonlinear model of an MMIC containing N FETs comprises N × M random variables and is expressed in terms of their (N × M) × (N × M) covariance matrix, in which the cross-correlation between parameters of two different devices is a function of the mutual distance between them. In particular, the elements of the cross-correlation matrix have been evaluated in a discrete set of distance values, as will be described in section III. If the actual distance d i between active devices is not included in the set of distance values, the cross-correlation matrix extracted for the distance closest to d i is chosen. In section III, a procedure to extract model parameters from a population of measured devices is presented.
III. THE EXTRACTION PROCEDURE
The statistical model of an MMIC composed of several FET devices is extracted from a database of Ids and S-parameter measurements performed on a test chip containing devices with the same geometry. The reticular distance of the test chip is the minimal distance value d min at which the cross-correlation matrix is evaluated. The cross-correlation matrix is also evaluated at all the distances d i obtained as

d i = d min · √(n² + m²),

where n and m are integer numbers ranging from 0 up to values bounded by a maximal distance d max, which depends on the test-chip dimensions.
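A short Python sketch of this step is given below; it assumes that the test-chip sites form a square grid of pitch d min, so that the achievable device-to-device distances are d min · √(n² + m²) (the Euclidean form is our reading of the reticular layout, not a formula quoted from the article).

import numpy as np

def reticular_distances(d_min, n_max):
    # All distinct non-zero distances between grid sites with pitch d_min,
    # for integer offsets n, m in [0, n_max]
    d = {round(d_min * float(np.hypot(n, m)), 6)
         for n in range(n_max + 1) for m in range(n_max + 1)}
    d.discard(0.0)
    return sorted(d)

print(reticular_distances(75.0, 3))  # micrometres: 75.0, 106.066..., 150.0, ...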
The reticular distance d min in the test chip has to be chosen accordingly with the minimal distance among active devices obtained in the particular designed MMIC. However, the same test chip can be used to statistically characterize several designed MMICs, and in this case d min has to be chosen as equal to the minimal distance allowed by the technological process.
A critical step in MMIC model extraction is the use of optimization routines for the evaluation of the statistical nonlinear model of the single device. Large multidimensional error functions produce several local minima close to the global minimum, and the algorithm cannot properly converge to the global minimum. Moreover, if the error function is not very sensitive to a certain fitting parameter Pi, a final low value of the error function can be obtained with Pi values very different from one another; as a consequence, the noise introduced during the optimization process prevents the statistical distribution of the parameter Pi from being correctly determined from a set of transistors. Therefore, we have used decomposition-based optimization algorithms [18,19] and a proper selection of the empirical statistical parameters to extract the nominal model of the active device. Both the bias-independent parameters and the fitting parameters of the nonlinear functions in the model are extracted with an automatic procedure [17,20] which makes use of simulated annealing (SA) [21][22] and gradient-optimization routines. A sensitivity analysis has been performed for each of the six nonlinear functions as suggested in [19], and the optimization routines have been modified to include parameter partition: at each step, the dimension of the error function is decreased and a lower probability of entrapment in local minima is obtained. The overall extraction procedure of the MMIC model is shown in Figure 2 and will be described in detail. Note that the extraction procedure in Figure 2 does not depend on the nonlinear equivalent circuit chosen for the active device.
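The parameter-partition idea can be illustrated with a generic block-coordinate descent, sketched below with SciPy; the error function, the partition, and the inner optimizer are placeholders and do not reproduce the routines of [18,19].

import numpy as np
from scipy.optimize import minimize

def block_descent(error_fn, p0, blocks, sweeps=3):
    # Cyclically optimize each parameter block while freezing the others,
    # so each sub-problem has a lower dimension than the full error function
    p = np.asarray(p0, dtype=float)
    for _ in range(sweeps):
        for idx in blocks:
            idx = np.asarray(idx)
            def sub(x, idx=idx):
                q = p.copy()
                q[idx] = x
                return error_fn(q)
            p[idx] = minimize(sub, p[idx], method="Nelder-Mead").x
    return p

# Toy usage: quadratic error with minimum at (1, 2, 3, 4), split into two blocks
target = np.array([1.0, 2.0, 3.0, 4.0])
err = lambda q: float(np.sum((q - target) ** 2))
print(block_descent(err, np.zeros(4), blocks=[[0, 1], [2, 3]]))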
A. FET Nominal Model Extraction
The nonstatistical empirical parameters of the model are determined by extracting a nonlinear model for the device at the center of the chip. Then, a database of DC and RF nonlinear models is extracted for each transistor on the test chip other than the center device, by optimizing the M statistical parameters and keeping the nonstatistical parameters unchanged. A decomposition-based optimization routine has been used to determine the parameters of the static empirical function Ids DC. Then, mean values and standard deviations of the fitting parameters considered as statistical variables are evaluated from the database of nonlinear models. Finally, the nominal model of the single device is obtained by substituting into the preliminary nominal model the mean values calculated for the M statistical parameters.
B. Distance-Dependent Covariance Matrix Evaluation
The set of all possible distances d i between devices on the test chip is determined and, for each distance, a database DB(d i ) is built comprising all couples of nonlinear models for devices at distance d i . The database DB(d i ) is used to calculate the correlation matrix of two devices at distance d i , and in particular the cross-correlation block C Mod (d i ) [16]. The auto-correlation block C Mod (0) is considered to be identical for each distance and is evaluated from the database DB(0), which comprises the nonlinear models of all the devices on the test chip.
C. MMIC Covariance Matrix Evaluation
The correlation matrix of a given MMIC is determined as follows: the diagonal blocks of the matrix are the C Mod (0) auto-correlation blocks, and the cross-correlation block between the devices T j and T k is the block C Mod (d i ) corresponding to the distance d i on the test chip closest to the actual distance between T j and T k .
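The block-assembly rule just described can be written compactly; the sketch below assumes the cross-correlation blocks are stored in a dictionary keyed by the test-chip distances, and the device coordinates, M, and the block contents are placeholders.

import numpy as np

def mmic_correlation(positions, C0, C_blocks):
    # positions: list of (x, y) device coordinates on the MMIC
    # C0: (M, M) auto-correlation block C_Mod(0)
    # C_blocks: dict mapping test-chip distance d_i -> (M, M) block C_Mod(d_i)
    N, M = len(positions), C0.shape[0]
    dists = np.array(sorted(C_blocks))
    C = np.empty((N * M, N * M))
    for j in range(N):
        for k in range(N):
            if j == k:
                block = C0
            else:
                d = np.linalg.norm(np.subtract(positions[j], positions[k]))
                block = C_blocks[float(dists[np.argmin(np.abs(dists - d))])]  # closest d_i
            C[j*M:(j+1)*M, k*M:(k+1)*M] = block
    return C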
Validation of the covariance matrices obtained from the database measurements is performed by evaluating the confidence interval found for a given confidence level at each distance: too large a confidence interval means that an insufficient number of statistical samples has been considered and a larger measurement database is needed. Note that for higher values of d i , a lower number of couples at distance d i is found on the test chip and the statistical significance of the database DB(d i ) is lowered. A maximal distance d max is found for which the database DB(d max ) (and therefore the starting database) is considered statistically significant. Therefore, the size of the test chip determines the maximal mutual distance d max between two devices on the MMIC for which an accurate statistical model can be obtained.
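The article does not state which interval construction was used; as one possibility, a standard Fisher z-transform confidence interval for a correlation coefficient estimated from n device couples can be sketched as follows.

import numpy as np
from scipy import stats

def corr_confidence_interval(r, n, level=0.95):
    # Fisher z-transform confidence interval for a sample correlation coefficient r
    z = np.arctanh(r)
    se = 1.0 / np.sqrt(n - 3)          # standard error of z for n sample pairs
    zc = stats.norm.ppf(0.5 + level / 2.0)
    return np.tanh(z - zc * se), np.tanh(z + zc * se)

print(corr_confidence_interval(0.6, 100))   # roughly (0.46, 0.71)

The interval widens quickly as n drops, which is why the larger distances, with few device couples on the test chip, lose statistical significance first.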
IV. THE VALIDATION PROCEDURE
Validation of the MMIC statistical model extraction procedure is performed by comparing the covariance matrix of test-chip-measured S parameters to the one obtained from the statistical model at each frequency and for several distances d i between transistors. Here, a hypothesis testing procedure [23] has been used to check the equivalence between mean values, standard deviations, and both auto-correlation and cross-correlation blocks of the statistical populations. In particular, the hypothesis that the correlation coefficients c Meas (d i ) (for the measured database) and c Mod (d i ) (for the model database) for transistors T i and T j at distance d i are statistically equivalent, with a significance level α and a given tolerance factor, has been checked. The same tests to check the equivalence of the measured and model mean values and standard deviations, and a test to check the sign of corresponding correlation coefficients, have also been performed. In order to perform extraction algorithm validation without fabricating and measuring large test chips, a procedure (see the flow diagram in Figure 3) to obtain a database of measurements of a simulated N × N test chip from measurements performed on a single device is presented. As a first step, the nonlinear model is extracted from the Ids and S-parameter measurements of the starting device; then a nonlinear model comprising a set of values for the M statistical variables, considered as a sample of a multivariate Gaussian variable with correlation matrix C, is determined for each transistor of the test chip. Finally, the static Ids current and S parameters are calculated for each device and used to perform the MMIC model extraction and validation. The elements of the correlation matrix C used to build the test-chip database, shown in Figure 4, are considered as a function of the distance between the devices in the test chip. The correlation matrix C is composed of both M × M auto-correlation blocks C(0) and M × M cross-correlation blocks C(d i ). The auto-correlation coefficients c(0) are considered identical and equal to a common constant value. The diagonal elements of the cross-correlation matrix C(d i ), for transistors T i and T j at distance d i on the test chip, are evaluated from a decreasing exponential function of the distance, F(d i ), in order to take into account the drop of the S-parameter correlation as a function of the distance; the other elements of C(d i ) are equal to the product of that constant and F(d i ).
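To make the generation of the simulated test-chip database concrete, the sketch below samples one realization of all device parameters at once; the exponential decay length, the common correlation value, and the nominal parameters are placeholders, and only the decreasing-exponential form of F(d) is taken from the text.

import numpy as np

rng = np.random.default_rng(1)
M, d_min = 11, 75.0
rho, lam = 0.5, 300.0                    # assumed common correlation and decay length
grid = [(i, j) for i in range(10) for j in range(10)]   # 10 x 10 test chip

# Device-to-device kernel K (exponential drop F(d)) and within-device block P;
# the full correlation matrix is their Kronecker product
K = np.array([[np.exp(-d_min * np.hypot(a[0]-b[0], a[1]-b[1]) / lam)
               for b in grid] for a in grid])
P = np.full((M, M), rho)
np.fill_diagonal(P, 1.0)
C = np.kron(K, P)

Pi = np.ones(M)                          # placeholder nominal parameter values
mean = np.tile(Pi, len(grid))
std = np.tile(0.1 * Pi, len(grid))       # 0.1*Pi standard deviation, as in the text
params = rng.multivariate_normal(mean, C * np.outer(std, std))

With this structure, each cross block has F(d) on its diagonal and rho*F(d) off-diagonal, as described above, and the Kronecker construction guarantees a valid (positive semidefinite) correlation matrix because both K and P are positive definite.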
V. RESULTS AND DISCUSSION
The validation procedure has been performed according to the flow diagram in Figure 3, starting from Ids and S parameters (in the frequency range 1-50 GHz) stored in a 10 × 10 database of GaAs HEMT device measurements. The test chip has been built starting from measurements performed on a single device (gate length 0.2 µm, gate width 4 × 15 µm) of a PHILIPS PML-D02AH monolithic process, as reported in section IV. A reticular distance d min = 75 µm has been assumed for the test chip, according to the design rules of the technological process. The database is not statistically significant for distances greater than d max = 420 µm (corresponding to about 100 couples of transistors). The mean value and the standard deviation of the M statistical variables have been assumed to be equal to the corresponding values Pi of the starting device model and to 0.1·Pi, respectively.
The statistical model has been extracted from the test-chip database and implemented in the Agilent-ADS 2001 CAD tool [24], which allows a straightforward implementation of multivariate Gaussian distributions. A Monte Carlo analysis has been performed at a fixed bias point (Vgs = −0.2 V, Vds = 3 V), and the correlation matrix of the devices at distance d 1 = d min (corresponding to 180 couples of transistors) and d 2 = 2·d min (corresponding to 160 couples of transistors) for the extracted model has been evaluated and compared to the ones of the test chip. A cumulative level of significance α = 0.1 has been considered in order to perform statistical hypothesis tests for the real and imaginary parts of the S parameters in the 1-50 GHz frequency range. The hypothesis of a percentage error between measured and modeled S parameters lower than a given amount has been checked for the eight mean values and the eight variances; an error lower than 0.35 has been checked for the 28 auto-correlation and the 36 cross-correlation coefficients.
The amounts of percentage error between the measured and modeled mean values and variances which allow the statistical test to be passed in 50% and 75% of the cases are reported in Table II: average percentage errors of 6% and 53% (13% and 61%) allow the test to be passed in 50% (75%) of the cases for the mean values and the variances, respectively. The auto-correlation coefficients are statistically equivalent with an error lower than 0.35 in 67% of the cases, and the same sign has been found in 87% of the cases; a comparison between auto-correlation coefficients of S-parameter real parts is shown in Figure 5. The cross-correlation coefficients for d = d 1 and d = d 2 are statistically equivalent with an error lower than 0.35 in 84% and 94% of the cases, respectively. The same sign has been found in 87% and 92% of the cases. In Figure 6, a comparison between some cross-correlation coefficients for d = d 1 and d = d 2 is shown. The proposed statistical model can be used to perform accurate yield optimization and/or correction within techniques which make use of design centering [3][4]. A Monte Carlo analysis has to be performed in order to determine the nominal values of the design parameters that allow acceptable performance under process-parameter variation. At each Monte Carlo iteration, a different set of random variables is generated according to the statistical distribution described within the MMIC model.
The accuracy of the statistical model is therefore a crucial feature in order to successfully evaluate and optimize the yield. To the best of our knowledge, the proposed model is the first nonlinear model which is able to account for the statistical distribution of the parameters of several devices within a given MMIC. In particular, as the correlation is evaluated as a function of the mutual distance between devices, both the yield underestimation deriving from the use of uncorrelated models and the yield overestimation due to a total-matching assumption are avoided.
VI. CONCLUSION
Most of the previous works dealing with yield evaluation of GaAs FET MMICs address the need for an accurate statistical model of the device onto which process-parameter variations can be mapped. Little attention is usually paid to the correlation between device parameters on the same chip and to the variation of this correlation as a function of the distance between devices, thus leading to an MMIC model composed of uncorrelated devices.
In this article, a nonlinear statistical model of GaAs FET MMICs has been developed, starting from the nonlinear model of FET devices presented in previous works. A multivariate Gaussian distribution is assumed for the model parameters, and the covariance matrix elements are considered as functions of the device distances. An automatic procedure to extract the MMIC model from measurements performed on a test chip, which makes use of custom optimization routines, has also been presented. An algorithm which does not require fabrication and measurement of the test chip has been presented and used to perform the validation of the MMIC model extraction procedure with a GaAs HEMT monolithic process. Results have shown that the proposed methodology is able to model the S-parameter populations of a given MMIC with a high level of statistical significance. Statistical hypothesis tests performed on mean values, standard deviations, and correlation coefficients have highlighted the statistical equivalence between measured and modeled populations at a cumulative significance level of 0.1, and encourage the use of the model to evaluate and optimize circuit yield in MMIC design via commercial CAD tools. | 2019-04-13T13:06:47.572Z | 2003-09-01T00:00:00.000 | {
"year": 2003,
"sha1": "3215f4db5eecf7bd37eb9b161e812c44f05b72bc",
"oa_license": null,
"oa_url": "https://doi.org/10.1002/mmce.10095",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "39bd494cba24e36191bdf15c1880c2b63437cbfa",
"s2fieldsofstudy": [
"Engineering",
"Physics",
"Computer Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
486402 | pes2o/s2orc | v3-fos-license | The RNA-dependent RNA polymerase essential for post-transcriptional gene silencing in Neurospora crassa interacts with replication protein A
Post-transcriptional gene silencing (PTGS) pathways play a role in genome defence and have been extensively studied, yet how repetitive elements in the genome are identified is still unclear. It has been suggested that they may produce aberrant transcripts (aRNA) that are converted by an RNA-dependent RNA polymerase (RdRP) into double-stranded RNA (dsRNA), the essential intermediate of PTGS. However, how RdRP enzymes recognize aberrant transcripts remains a key question. Here we show that in Neurospora crassa the RdRP QDE-1 interacts with Replication Protein A (RPA), part of the DNA replication machinery. We show that both QDE-1 and RPA are nuclear proteins and that QDE-1 is specifically recruited onto the repetitive transgenic loci. We speculate that this localization of QDE-1 could allow the in situ production of dsRNA using transgenic nascent transcripts as templates, as in other systems. Supporting a link between the two proteins, we found that the accumulation of short interfering RNAs (siRNAs), the hallmark of silencing, is dependent on an ongoing DNA synthesis. The interaction between QDE-1 and RPA is important since it should guide further studies aimed at understanding the specificity of the RdRP and it provides for the first time a potential link between a PTGS component and the DNA replication machinery.
INTRODUCTION
The initial, paradoxical observation that introducing extra copies of a gene could actually lead to silencing of the same gene has led to the discovery of a wide range of phenomena that all involve post-transcriptional gene silencing (PTGS) of repetitive sequences and that play roles varying from genome defence against viruses and transposons, to development and chromosomal segregation (1,2). In the last few years, the details of how homologous transcripts are either degraded or translationally repressed during PTGS have been largely worked out. A fundamental intermediate, common to all forms of PTGS, is the production of a double-stranded RNA (dsRNA) which is then processed by the RNAse III enzyme Dicer into short interfering RNAs (siRNAs) or microRNAs (miRNAs) (3,4). The subsequent processing of siRNAs and miRNAs and their mode of binding through homology to their target transcripts are widely conserved through several organisms and great progress has been made in working out the details of these processes. However, one of the few remaining open questions concerns events upstream of the production of dsRNA: how are repetitive sequences recognized as different from endogenous genes in the first place? This question is particularly important, in light of the fact that PTGS probably first evolved as a defence mechanism against invading sequences such as transposons and viruses (5,6).
In the model fungus Neurospora crassa, where PTGS (known as quelling) can be induced by transgenes, it has been observed that in order to induce gene silencing, transgenic loci need to be transcribed (2). It has been suggested that an RNA-dependent RNA polymerase (RdRP) called qde-1 (7), specifically required for quelling, may use the transgenic RNA as its substrate to produce a dsRNA. This idea is supported by the fact that, in vitro, QDE-1 has been shown to be able to convert a single-stranded RNA template into dsRNA (8).
To gain insight into how the RdRP is able to specifically target only the gene to be silenced, we introduced a tagged version of QDE-1 and used this to immunopurify QDE-1-containing protein complexes. Using this approach, we were able to show that QDE-1 interacts with the Neurospora homologue of the largest subunit of Replication Protein A (RPA-1), a single-stranded DNA-binding protein that is important in DNA replication, repair and recombination. Moreover, we were able to show that QDE-1 is enriched at the transgenic locus required to trigger PTGS, and that the accumulation of siRNAs is coupled to DNA synthesis. Taken together, these observations support the view that repeated sequences could be targeted for silencing as they are being replicated, and that it is during replication that these sequences are somehow identified by the cell as qualitatively different from endogenous, non-silenced genes.
MATERIALS AND METHODS

Plasmid constructions

A 200 bp BstZ171-BglII fragment surrounding and including the start codon of qde-1 was removed and replaced with a PCR homologous fragment modified to encode the FLAG epitope (DYKDDDDK) immediately after the ATG (primer sequences available on request).
The plasmid encoding cmycRPA-1 (pMycRepA) was made by first subcloning from cosmid H116G2 (Fungal Genetics Stock Centre, University of Kansas) into pBluescriptSK a 4.6 kb BamH1 genomic fragment containing the RPA-1 coding sequence (NCU03606.3) and 1.6 and 1 kb of upstream and downstream sequence, respectively. To introduce the c-myc epitope at the N-terminus of the protein, a 130-bp BstB1-Pml1 fragment was removed and substituted with a PCR homologous fragment modified to encode the c-myc epitope (EQKLISEEDL) immediately after the ATG (primer sequences available on request).
The construct (pRepAKO) for knocking out the endogenous RPA-1 gene was prepared by amplifying 1 kb of sequence from both immediately upstream and immediately downstream of the RPA-1 coding sequence. The upstream and downstream amplified regions were then cloned either side of the hygromycin resistance cassette in the previously described plasmid pCSN44 (10).
Neurospora strains, growth conditions and transformation procedure
The stably silenced Neurospora strain, 6XW, and the silencing-deficient qde-1 mutant strain 107 derived from this strain have been previously described (2,11).
Growth conditions for Neurospora were essentially as described elsewhere (12). For hydroxyurea (HU) treatments, HU (0.1 M final concentration) was added to overnight-grown cultures (10^6 conidia/ml) for 4 h with shaking at 28°C. Mycelia were then harvested by filtration and washed with an excess of Neurospora minimal medium (NMM) and frozen (T 0 ), or rinsed with fresh pre-warmed NMM and incubated for a further 30 and 100 min with shaking at 28°C and then harvested and frozen (T 30 and T 100 , respectively). Quinic acid induction was used to increase the level of production of FLAGQDE-1. This was achieved by placing grown mycelia in 1× Vogel's and 0.3% quinic acid for 4 h with shaking at 28°C. Preparation of N. crassa spheroplasts and transformation with recombinant plasmids was performed as described by Vollmer and Yanofsky (13).
Knockout of the RPA-1 gene was achieved by transforming a Kpn1-Not1-linearized version of pRepAKO into the strain FGSC9719 (Δmus-52), which is defective in the non-homologous end-joining pathway and thus shows a dramatic increase in the frequency of homologous recombination (14). Having verified a heterokaryotic strain containing a knockout event at the endogenous locus, we were unable to purify this strain to homokaryosis despite repeated serial transfers and microconidiation.
Immunoprecipitation (IP)
Large-scale IP was performed by homogenizing 5 g (wet weight) of ground, frozen mycelia in 15 ml lysis buffer (10% glycerol, 150 mM NaCl, 50 mM HEPES, pH 7.4). After centrifugation at 10,000 g at 4°C to remove cellular debris, the supernatant was incubated for 3 h at 4°C on a rotating wheel in the presence of 100 µl (packed gel volume) of anti-FLAG M2 agarose resin (Sigma). The resin was then pelleted by gentle centrifugation at 1000 g and washed three times in lysis buffer followed by two washes in tris-buffered saline (TBS). The precipitated proteins were eluted from the resin with FLAG peptide (Sigma F3290) in TBS (250 µg/ml).
A similar procedure was followed in the IP of cmycRPA-1 using an anti-cmyc agarose resin (Sigma E6654).
Mass spectrometry of proteins interacting with QDE-1
Immunoprecipitated proteins were resolved on a 7%T-3.3%C SDS-PAGE separating gel (1 × 18 × 18 mm), revealed by Sypro Ruby staining and visualized using a Typhoon 9200 laser scanner (GE Healthcare). Proteins were excised, digested with trypsin, and MALDI-TOF/TOF (4700 Proteomics Analyzer; Applied Biosystems) was used to obtain mass spectra, which were analysed using the GPS Explorer software (Version 1.1, Applied Biosystems) against the MASCOT search engine (Matrix Science). Scores greater than 61 were considered significant (P < 0.005).
Western blot analysis
Western blots were performed using standard procedures. Both the anti-FLAG antibody (Sigma F3165) and the anti-cmyc antibody (Sigma M4439) were used at a 1:2000 dilution. The secondary antibody was HRP-conjugated anti-mouse produced in goat (Bio-Rad) and used at 1:5000.
Preparation of nuclear extracts
Nuclei were isolated by a modification of the method described by Luo et al. (15) using freshly harvested mycelial pads (2-3 g, wet weight) in an initial lysis volume of 30 ml. A more detailed description of this method is provided in the Supplementary Data.
Chromatin immunoprecipitations (ChIP)
ChIP was carried out as described previously (16,17), with modifications which are described in the supplementary methods. Essentially, conidia (10^7) were inoculated in 100 ml NMM and grown for 24 h, and the mycelia were fixed in 2.5% formaldehyde for 10 min, with shaking. For the immunoprecipitation, 15 µl packed gel volume of anti-FLAG resin was used per ml of lysed, sonicated chromatin (15 mg in total).
Quantification of immunoprecipitated DNA
Quantification was performed using a real-time PCR machine, LightCycler (Roche), with FastStart DNA Master SYBR green 1 kit (Roche). Data were analysed with built-in LightCycler software, version 3.01, using the second derivative method for determining the crossing point (Cp) value for each sample.
Transgenic DNA was amplified using the primer P2 (5′-GCGCGCAATTAACCCTCAC) derived from the bacterial vector sequence and the primer P1 (5′-AAGAGACCCGGTAGGAGGAG) from the al-1 transgene to avoid amplification of the endogenous al-1 gene DNA. The actin primers (5′-CCCAAGTCCAACCGTGAGAA and 5′-GGACGATACCGGTGGTACGA) are derived from the fifth exon sequence of the actin gene.
QDE-1 enrichment at the transgenic locus was measured as the relative increase in the amount of transgenic DNA with respect to the actin DNA between the 'IP' and 'input' samples. As a negative control we used the silenced non-FLAG strain, 6XW, from which the FLAGqde-1 strain derives and that thus contains the same array of transgenes. We assayed the enrichment in these two strains in four independent ChIP experiments and compared them using a paired Student's t-test.
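For illustration, the enrichment calculation from LightCycler crossing points can be sketched as below; the Cp numbers are invented, and an ideal amplification efficiency of 2 per cycle is assumed rather than taken from the article.

import numpy as np
from scipy import stats

def fold_enrichment(cp_ip_tg, cp_ip_act, cp_in_tg, cp_in_act, eff=2.0):
    # Relative increase of transgenic DNA over actin between 'IP' and 'input',
    # computed as a delta-delta-Cp with an assumed amplification efficiency
    ddcp = (cp_in_tg - cp_in_act) - (cp_ip_tg - cp_ip_act)
    return eff ** ddcp

# Invented Cp values for four independent ChIP experiments (FLAG strain)
flag = [fold_enrichment(25.0, 20.0, 25.8, 20.0),
        fold_enrichment(25.1, 20.1, 25.9, 20.1),
        fold_enrichment(24.9, 20.0, 25.6, 20.0),
        fold_enrichment(25.2, 20.2, 26.1, 20.1)]
ctrl = [1.10, 0.95, 1.02, 1.05]                 # non-FLAG control strain (6XW)
t, p = stats.ttest_rel(flag, ctrl)              # paired Student's t-test
print(round(float(np.mean(flag)), 2), round(float(p), 3))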
Small RNA purification and northern analysis
Small RNA purification and northern analysis was performed as described previously (18).
RESULTS

Identification of RPA-1 as an interacting partner of QDE-1
To gain insight into how the RdRP is able to specifically target only the gene to be silenced, we placed qde-1 under control of the quinic acid-inducible promoter (qa-2) and tagged the gene with an N-terminal FLAG epitope, with the aim of immunoprecipitating any interacting protein partners. We first reintroduced the tagged, cloned version of QDE-1 (FLAGQDE-1) into the qde-1 mutant strain 107, identified previously in an insertional mutagenesis screen (11). Strain 107 derives from the reference strain 6XW, which has ~20 copies of the albino-1 (al-1) gene inserted in a head-to-tail tandem repeat fashion (11). Since al-1 is responsible for carotenoid biosynthesis, strains silenced in this gene are easily identified by visual inspection as white in colour, due to the absence of carotenoid production. Restoration of silencing to FLAGQDE-1-transformed strains showed that our epitope-tagged protein was fully functional (Supplementary Data, Figure S1). We then purified the FLAGQDE-1 protein from large-scale vegetative cultures using anti-FLAG conjugated agarose (Sigma), followed by competitive elution with the FLAG peptide. We analysed the eluted proteins on a 7% 1D PAGE gel and looked for bands that specifically co-purified with FLAGQDE-1, using a silenced non-FLAG strain (6XW) as a negative control (Figure 1A). Despite some non-specific background due to cross-reaction of the FLAG antibody in Neurospora, and the presence of several specific bands that turned out to be degraded or truncated versions of FLAGQDE-1, one other protein of ~70 kDa consistently co-purified with FLAGQDE-1. This protein was revealed by mass spectrometry to be the largest subunit (70 kDa) of the heterotrimeric RPA complex (Supplementary Data, Figures S2 and S3). In order to further confirm the interaction between these proteins, we cloned the Neurospora gene for the large subunit of RPA (rpa-1) and tagged this gene with the c-myc epitope (cmycRPA-1). In strains that expressed both FLAGQDE-1 and cmycRPA-1, by immunoprecipitating cmycRPA-1 we were able to co-purify FLAGQDE-1 (Figure 1B).
RPA is responsible for binding and stabilizing singlestranded DNA templates and as such is an essential component not just in the replication fork but also in processes such as DNA repair and recombination (19).
The Neurospora rpa-1 is an essential gene

Having identified RPA-1 as a potential novel component of the PTGS pathway, we attempted to knock out this gene in a silenced background to see if silencing was relieved. Although we were able to disrupt the endogenous rpa-1 locus, these strains could only be maintained in heterokaryosis with wild-type nuclei, suggesting that the homokaryotic state was lethal (Supplementary Data, Figure S4). The lethality of the rpa-1 knockout has since been confirmed independently by the Neurospora Knockout Consortium (http://www.dartmouth.edu/~neurosporagenome/1_s1.html). This observation is in agreement with the crucial role of RPA in replication and repair, and this knockout is lethal in several other organisms (20,21). The inability to knock out rpa-1 limited our ability to test the functional importance of the RPA-1/QDE-1 interaction. We therefore investigated whether there was additional supporting evidence to suggest that RPA-1 could play a role in silencing, through its role in either DNA replication or repair. In this vein we tested other assumptions that should be true were the interaction between QDE-1 and RPA-1 to be of functional relevance in silencing.
QDE-1 is a nuclear protein, which is specifically recruited onto transgenic repetitive loci
In humans and yeast, RPA has been shown to interact physically and functionally with the WRN and BLM RecQ DNA helicases and is essential for their DNA unwinding activity on recombination intermediates at the replication forks (22,23). Strikingly, a direct Neurospora homologue of WRN and BLM is QDE-3, another gene encoding a DNA helicase essential for PTGS that, together with QDE-1, is known to be upstream of the production of dsRNA during PTGS (18,24). Together, these previous observations and our demonstration that QDE-1 and RPA-1 interact in a complex led us to investigate the possibility that QDE-1 may be coupled with the DNA replication machinery. For this reason we first investigated whether the interaction between RPA-1 and QDE-1 might occur in the nucleus, where replication takes place. The program P-SORT (25) predicts QDE-1 to be a nuclear protein, based on the presence of a bipartite nuclear localization sequence, and we confirmed that this was the case by showing that FLAGQDE-1 was enriched in nuclear extracts (Figure 2A). We similarly confirmed that our c-myc-tagged version of RPA-1 was also nuclear (Figure 2A). In yeast it has been shown that the RdRP forms part of a complex (RDRC) that is peripherally associated with the silenced locus (26-28). The RDRC both interacts with, and is essential for the localization of, the more tightly associated RNA-induced transcriptional silencing (RITS) complex, which is guided by small interfering RNAs (siRNAs) (26,29). We performed ChIP experiments to observe whether QDE-1 was associated with the transgenic locus. We cross-linked the chromatin to fix any DNA-protein associations and immunoprecipitated QDE-1, followed by quantitative PCR to see if the protein was preferentially associated with any specific DNA sequences. After immunoprecipitation using the anti-FLAG antibody, we detected a reproducible (1.7-fold) enrichment (P < 0.01) of FLAGQDE-1 at the transgenic al-1 locus when compared to the unrelated, non-silenced endogenous actin gene (Figure 2B). As a negative control we again included the silenced, non-FLAG strain, 6XW. Although low, our level of enrichment is comparable to that found for the RdRP in Schizosaccharomyces pombe (Rdp1) at the centromeric dg and dh repeats (26). It has been shown that in Neurospora there is a tight correlation between transcription of the transgenic array and the efficiency of silencing, with the RNA produced from the transgenes proposed to be aberrant and converted by the RdRP to a dsRNA (2). Our demonstration that the RdRP QDE-1 is enriched at the transgenic locus suggests that this conversion step happens in situ as the aberrant RNA is produced.
siRNA accumulation depends on an ongoing DNA synthesis
In the absence of an rpa-1 null allele, we tested whether a functional interaction between QDE-1 and RPA-1 might exist through the latter's role in DNA replication. To do this we treated Neurospora cultures with HU, a specific inhibitor of DNA synthesis (30). We observed that treatment with HU abolishes the accumulation of siRNAs (Figure 3), indicating that an ongoing DNA synthesis is required for triggering the silencing machinery. The accumulation of siRNAs was fully restored 100 min after removing the HU from the cultures, suggesting a direct and reversible effect of HU. Previous results showed that the direct expression of a hairpin dsRNA from an inverted repeat efficiently elicits silencing, bypassing the requirement of both qde-1 and qde-3 (24,31). Strikingly, the accumulation of siRNAs produced in a strain similarly expressing a hairpin dsRNA is not affected by HU, indicating a qualitative difference between the formation of siRNAs that result from a tandem transgene and those that result from an inverted repeat (Figure 3). Although we cannot exclude an indirect link between replication arrest and the disappearance of siRNAs, our data would suggest that it is only the phases of RNA silencing concerned with recognition of the repeated sequence and production of a dsRNA, mediated by QDE-3 and QDE-1, that are linked to DNA replication.

[Figure 1, caption fragment: ... FLAGQDE-1 ('FLAG'). As a negative control a silenced, non-FLAG strain was included ('6XW'). Mass spectrometry analysis of bands specific to the FLAG sample revealed that RPA-1 (13 peptides, 29% sequence coverage) specifically co-purifies with QDE-1. Bands marked with * (asterisk) were identified as degradation products of QDE-1. (B) The interaction between QDE-1 and RPA-1 was confirmed by constructing strains that contained both a cmyc-tagged version of RPA-1 and FLAGQDE-1. In these strains IP of cmycRPA-1 also co-purified FLAGQDE-1, and vice versa.]
DISCUSSION
In our attempts to better understand how the silencing of transgenes is initiated in Neurospora, we identified RPA-1, a DNA-binding protein essential for both DNA repair and replication, as an interacting partner of QDE-1, the RdRP responsible for producing the dsRNA intermediate required in PTGS. This finding is significant since it has implications for the possible mode of action of QDE-1. Since RPA-1 is a nuclear protein we might also expect QDE-1 to be so, given their interaction. When we investigated the localization of QDE-1 we found that not only was it nuclear, but that it was also enriched at the transgenic tandem repeats that are required to trigger silencing. Thus the most likely scenario is that QDE-1 acts on the transgenic locus in situ to produce a dsRNA. A similar situation exists in the fission yeast, where the RdRP is also enriched at repeated loci that trigger silencing, yet in this case the localization of the RdRP to the locus is dependent on another complex, RITS, which is guided by siRNAs homologous to the target locus (26).

[Figure 3 caption: The accumulation of transgene-specific siRNAs correlates with DNA replication. After HU treatment of mycelia to inhibit DNA replication, siRNAs were extracted at various timepoints (from 0-100 min, T 0 -T 100 ) following removal of HU. The accumulation of al-1 siRNAs is abolished in a strain where silencing is induced by a transgenic array (6XW) but not in a strain where silencing is induced in a qde-1/qde-3-independent fashion by direct expression of a hairpin dsRNA with homology to the al-1 gene (pIR). By 100 min after HU release, siRNAs in 6XW were restored to the normal level found in non-treated mycelia (NT). As a loading control, an ethidium bromide stain of total low-molecular-weight RNA (RNA) is shown.]

There is currently no evidence in Neurospora
for a siRNA-directed complex similar to RITS that targets repeat sequences. Indeed, assembly of heterochromatin seems to be siRNA-independent in this organism (16,32). We therefore considered the possibility that the enrichment of QDE-1 at repeat sequences could be mediated through its interaction with RPA-1, rather than through siRNAs. Our ability to test this hypothesis was limited by the lethality of the rpa-1 null allele, which prevented us from directly assaying the functional role of this gene. Faced with this problem, we decided to test our hypothesis in an indirect fashion by blocking one of the processes for which rpa-1 is essential (replication) and monitoring the effect on silencing. We showed that in the case of the silenced transgenic tandem repeats, where the accumulation of siRNAs, the hallmarks of silencing, is dependent on both QDE-1 and the RecQ DNA helicase QDE-3, this accumulation was also blocked in the absence of replication. On the other hand, and importantly, in a strain producing a dsRNA directly as a hairpin, where the accumulation of siRNAs is both QDE-1- and QDE-3-independent (33), blocking replication had no effect on the levels of siRNAs. This finding suggests that DNA synthesis is only coupled to RNA silencing through nuclear events mediated by QDE-1 and QDE-3, which are concerned with the recognition and transcription of repetitive DNA elements.
In summary, we have shown that QDE-1 interacts with RPA, that QDE-1 can reside at the transgenic locus, and that the accumulation of siRNAs is correlated with DNA synthesis. These findings, coupled with the previous observation that in other organisms RPA interacts with, and is necessary for the function of, the direct homologue of the QDE-3 DNA helicase required for transgene silencing in Neurospora (23), can provide several insights into how the cell recognizes in the first instance a transgenic repeat which is to be silenced. Based on the above, we propose that it is during the act of replication that transgenic sequences are recognized by QDE-1, through its interaction with RPA-1, as different from endogenous genes and are thus targeted for silencing by the in situ production of a dsRNA. However, since RPA-1 must bind to the entire genome at some point during DNA replication, the question remains of what distinguishes transgenic sequences as different from the rest of the genome during replication. Here a clue could lie in the requirement in Neurospora for sequences to be inserted in tandem in order to trigger silencing (2,34). We speculate that repeated sequences in close proximity could be recognized at the time of replication through their ability to pair with each other, perhaps forming unfavourable intermediates. It is known that tandem repeats, during DNA replication, may frequently form a slippage intermediate or 'slipped' misalignments during recombination at stalled replication forks, both of which are cruciform-like structures that have to be resolved in order to allow progression of the fork (35)(36)(37)(38). In different organisms, RPA, together with homologues of the RecQ DNA helicase QDE-3, has been found to be required to promote the progression of the replication fork by resolving such cruciform structures and preventing genome instability (23,39,40).
While several questions remain unanswered in our speculative model, including the source of the initial RNA which QDE-1 converts into a dsRNA, it is nonetheless attractive in that it goes some way to answering long-standing questions related to the triggering of transgene-induced gene silencing, offering a silencing mechanism that relies on the detection of one of the intrinsic characteristics of these sequences: their repetitiveness. Such a system could potentially function to silence other repetitive elements, including non-IR transposons, viruses and certain chromosomal duplications. Interestingly, and supporting our model of RPA in marking repeated sequences, is the recent identification of RPA mutants in Arabidopsis which are defective in the transcriptional repression of a subset of transposons, suggesting that our model could be applicable across a range of species (41,42). | 2014-10-01T00:00:00.000Z | 2007-11-29T00:00:00.000 | {
"year": 2007,
"sha1": "4de9c4af7f39b16936fb8770f0a6eb9332e7f11f",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/nar/article-pdf/36/2/532/14120146/gkm1071.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "819f48c0a1935a5638ad9011094c4dd29c478404",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
252631064 | pes2o/s2orc | v3-fos-license | Comparative Study on Sediment Delivery from Two Small Catchments within the Lena River, Siberia
This paper studies the possibility of using the WaTEM/SEDEM model to assess sediment yield from catchments within the Lena River basin. The study was carried out by comparing predicted data with measured suspended sediment yield at gauging stations of the state monitoring network of Russia. The study was performed within two areas, one with plain and one with mountainous relief. The first site is located within the catchment area of the river Chara, with an area of 4150 km². The second site covers the catchment area of the Lena River between the Tabaginskiy and Kangalassky capes near Yakutsk city; the catchment area of this site is 15,740 km². The predicted sediment yield values for the “Yakutsk” catchment agree much better with the measured sediment yield than those for the “Chara” catchment. The predicted sediment yield from the study area remained almost unchanged over the period 1986–2019 and amounted to 3.5 t/km² per year, while the suspended sediment yield of the Lena at the Tabaga gauging station slightly increased from 7 to 9.45 t/km² per year.
Introduction
The Lena River is the largest river in the Arctic, and most of its catchment area is located in the permafrost zone. Modern climate change and permafrost degradation have led to changes in the flow of water, sediment, and chemicals into the Arctic Ocean [1]. The dynamics of sediment yield from the Lena River catchment deserve special attention, as they affect many processes, such as the rate of siltation of reservoirs and the mass of incoming pollutants. At the same time, sediment yield from the catchment area is an indicator of the intensity of erosion processes in the catchment. It is necessary to apply both modeling methods and field instrumental assessment methods, since each has its strengths and weaknesses for analyzing the sediment yield of a given area. Moreover, it is impossible to estimate the contribution of the catchment component over a large area to the sediment yield of the river without the use of erosion models.
Currently, there are different classifications of erosion models that are used, among other things, to assess the suspended sediment yield. Most researchers distinguish conceptual, empirical, and physically based models [2].
The empirical models are based on observations of the environment that can be statistically quantified without a detailed description of the causes of a physical process [3]. Examples of empirical models are USLE [4], RUSLE [5], and MUSLE [6]. Physically based models are based on the physics of flow and sediment transport processes and their interaction with the transfer of mass, momentum, and energy [7]. Examples of physically based models are WEPP [8], LISEM [9], EROSION 3D [10], and EUROSEM [11].
The studied territory of the Lena River catchment has repeatedly been the object of studies of erosion processes. Here we can mention works performed by employees of the Russian Academy of Sciences [19], Moscow State University [20], and Kazan Federal University [21]. In addition, soil erosion was assessed in the Lena River catchment as part of the larger project "An assessment of the global impact of 21st-century land use change on soil erosion" [22]. However, in all these studies soil losses within river catchments were estimated without taking into account the accumulation of part of the eroded material and without assessing the suspended sediment yield delivered from the catchment area to the river. In addition to erosion losses, it is necessary to know how this material accumulates along the path of sediment transport from the slopes to the hydrographic network, in order to assess the spatial erosion-accumulation sediment budget and to quantify sediment yield. The processes of accumulation of material down the slope are largely determined by the sediment connectivity of the territory. Several indicators are now used to quantify sediment connectivity: the sediment delivery ratio (SDR) [23][24][25]; the index of connectivity (I.C.) [26][27][28]; travel time [29,30]; and transport capacity [14,31].
One of the most commonly used approaches is the use of transport capacity within the Water and Tillage Erosion Model and Sediment Delivery Model (WaTEM/SEDEM) [2], due to the small amount of input data needed for calculations and the high quality of the results obtained. M. Sheng and H. Fang [32] have reviewed the development of the WaTEM/SEDEM model and its application prospects in studies of sediment transport. WaTEM/SEDEM was developed at the Physical and Regional Geography Research Group, KU Leuven University, Belgium. It is a spatially distributed soil erosion and sediment delivery model. Compared to other, more sophisticated dynamic models, this model requires minimal data input and its structure is simple. WaTEM/SEDEM has data requirements similar to those of the RUSLE model, and it can assess both water and tillage erosion; moreover, it can spatially model soil erosion and sediment deposition rates, as well as soil redistribution patterns. The WaTEM/SEDEM model has been used quite frequently around the world [2]. For example, WaTEM/SEDEM studies were conducted in Spain in 68 river catchments [33], in Italy in 40 river catchments [34], in Belgium in 24 river catchments [13], and in central and northern Mongolia [35].
The model is still rarely used within the territory of Russia. Its application can be noted in the traditional agricultural regions in the south of the European part of Russia, within the Belgorod region [31], and in the east of the Russian Plain [36]. This model is rarely used in the predominantly non-agricultural territories of Siberia and the Lena River catchment.
Accordingly, the purpose of this work is to assess the possibility of using the WaTEM/SEDEM model within two local catchment areas in the Lena River catchment with different topography, in order to quantify the sediment yield from the catchment area and its dynamics over the past few decades due to changes in land use and precipitation intensity.
Study Area
The study areas are located within plain and mountainous relief. The first, mountainous site is located within the catchment area of the gauging station in the town of Chara (Figure 1) and has an area of 4150 km². The «Chara» site is located within the Stanovoi upland and has elevations ranging from 700 to 3060 m, with an average elevation of 1449 m (Table 1). The geological structure of the site is mainly represented by pre-Quaternary acid plutonic and metamorphic rocks in the highland part of the site, as well as unconsolidated sediments and mixed sedimentary rocks in the Chara River valley [37]. The second, plain catchment area near the city of Yakutsk is located between the Tabaginskiy and Kangalassky capes and has an area of 15,740 km². The «Yakutsk» site is located within the Prilenskoye plateau and has elevations ranging from 74 to 423 m, with an average elevation of 223 m (Table 1). The geological structure of the site is mainly represented by siliciclastic sedimentary rocks in the elevated part of the site, as well as alluvium closer to the Lena channel [37].
The «Chara» catchment area is characterized by a water surface runoff of 250 mm, and the «Yakutsk» catchment area by a water surface runoff of 50 mm [38].
These territories were chosen because observational data on suspended sediment yield are available for them, which allows conclusions to be drawn about the correctness of the predicted calculations. These catchment areas are characterized by the environmental parameters presented in Table 1. It should be noted that, despite the negative mean annual temperatures, positive temperatures are observed here during May, June, July, August, and September, which allow the formation of surface runoff by rainfall. The soil cover is represented by Lithic Leptosols Humic, Rustic Podzols, and Histic Gleysols Dystric in the catchment area of the Chara River, and by Haplic Cambisols Eutric, Haplic Cambisols Dystric, Rubic Arenosols Eutric, and Voronic Chernozems Pachic in the «Yakutsk» catchment area.
Method
The WaTEM/SEDEM model [13,14] was used for the average long-term assessment of net erosion and of the sediment yield to the river network. Net erosion and accumulation maps were also created. WaTEM/SEDEM is based on a raster model of spatial data. The main structural element of the raster model is a pixel or grid cell. The methodology consists of three steps. The first step is to estimate the potential soil loss within each grid pixel based on the revised universal soil loss equation (RUSLE) (Equation (1)) [5]:

E = R · K · LS · C · P, (1)

where E is the mean annual soil loss, R the rainfall erosivity factor, K the soil erodibility factor, LS the topographic (slope length and steepness) factor, C the cover-management factor, and P the support practice factor.
In the second step, the transport of the eroded material is simulated. Sediment movement is estimated until a river element is reached. The sediment transport is calculated using the transport capacity (Equation (2)):

TC = ktc · R · K · (LS − 4.1 · S IR), (2)

where TC is the transport capacity (kg m⁻² year⁻¹), ktc the transport capacity coefficient (m) depending on the type of land cover, S IR the interrill slope gradient factor, and the other variables are the same as in Equation (1). The model uses two values of ktc: ktchigh for arable land and ktclow for unploughed land. We used the coefficient values set by default in the software package when modeling within the studied areas: ktchigh = 250 and ktclow = 75.
In the third step, a routing algorithm was used to transfer the eroded sediment from the source to the river network. The amount of sediment delivered from the up-slope areas was added to the sediment produced by erosion (E) for each pixel. If the sum exceeded the transport capacity (TC) of the flow, then the sediment output from the pixel was limited to the transport capacity. If the sum of the sediment delivered to a given grid pixel and the sediment formed by erosion in that pixel was lower than the transport capacity of the flow, then all the sediment was transported further down the slope.
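A minimal one-dimensional sketch of this routing rule is shown below; a single flow line from the divide to the river stands in for the real flow-accumulation network, and the numbers are illustrative.

def route_sediment(E, TC):
    # E, TC: per-pixel erosion and transport capacity along one flow line
    # running from the divide (index 0) to the river (last pixel)
    incoming = 0.0
    deposition = []
    for e, tc in zip(E, TC):
        load = incoming + e            # sediment available in this pixel
        out = min(load, tc)            # outflow limited by the transport capacity
        deposition.append(load - out)  # the excess is deposited in the pixel
        incoming = out
    return deposition, incoming        # 'incoming' is now the yield to the river

dep, sy = route_sediment(E=[2.0, 1.5, 3.0, 0.5], TC=[4.0, 2.5, 2.0, 3.0])
print(dep, sy)  # [0.0, 1.0, 3.5, 0.0] and a sediment yield of 2.5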
The outputs of the model are a spatial model of net erosion and the average long-term mass of sediment delivered from the catchment area to the river network. The average long-term sediment load obtained with WaTEM/SEDEM was compared with the measured values of suspended sediment at the gauging stations.
Input Data
The following cartographic models were used for the calculation of sediment yield and net erosion maps within the study areas: relief; soil erodibility; land use; the rainfall erosivity factor; and a model of the C-factor. A raster model for representing spatial data with a grid pixel size of 100 × 100 m was used in our study.
Relief and LS-Factor
Currently, there are several freely available global elevation models (DEMs) representing the relief with resolutions from 1 to 7.5 arc-seconds: SRTM C-SIR, SRTM X-SAR [39,40], ASTER GDEM v.2 [41], ASTER GDEM v.3, ALOS3D30 [42], ArcticDEM [43], GMTED 2010 [44], and others. All the models described above are the result of remote sensing of the Earth. Additionally, there is a DEM with medium spatial resolution. This DEM has a spatial resolution of 3" (about 100 m) and is available for download at http://viewfinderpanoramas.org (accessed on 22 September 2022) [45]. This model was created by a group of authors based on several data sources: two open-source elevation models, the SRTM C-SIR and ASTER GDEM, as well as topographic maps at scales of 1:100,000 and 1:200,000. This relief model was used because in the future we plan to conduct similar studies for the entire Lena River catchment (about 2.6 million km²). Using a more detailed spatial resolution for such a vast area is difficult. The Nearing [3] method is used to assess the LS-factor in the WaTEM/SEDEM methodology in our study.
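As a sketch, the slope-steepness component of the Nearing relation can be computed as below; the slope-length part of the LS-factor and its flow-accumulation treatment inside WaTEM/SEDEM are omitted here, and the coefficients are those of Nearing's (1997) continuous function.

import numpy as np

def nearing_s(slope_deg):
    # Continuous slope-steepness factor S after Nearing (1997)
    theta = np.radians(np.asarray(slope_deg, dtype=float))
    return -1.5 + 17.0 / (1.0 + np.exp(2.3 - 6.1 * np.sin(theta)))

print(nearing_s([1.0, 5.0, 15.0, 30.0]).round(2))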
Soil Erodibility
The spatial and attribute data of the Unified State Register of Soil Resources of Russia (USRSR), which are presented on the website http://egrpr.esoil.ru/ (accessed on 22 September 2022), were used to create a spatial model of erodibility (K-factor). The USRSR was mainly created on the basis of the soil map at a scale of 1:2,500,000 [46]. The soil erodibility map was created with Equation (3) and initial data from the USRSR. Alternative sources of spatial soil data for this area are the Harmonized World Soil Database (HWSD) [47], as well as data from the SoilGrids project [48]; these have a lower effective resolution, since they were created for this area on the basis of more generalized soil maps.
K = 2.77 × 10^-7 · M^1.14 · (12 − a) + 0.0043 · (b − 2) + 0.0033 · (c − 3), M = d · (100 − e), (3)

where a is the soil organic matter content (%), d the fraction content of particles 0.002-0.1 mm in size (%), e the fraction content of particles < 0.002 mm in size (%), b the soil structure class, and c the permeability class. A more detailed description of the erodibility calculation is given in the USLE, RUSLE, and RUSLE2 models [4,5,49]. Maps of soil types of the studied areas are presented in Figures 2 and 3.
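A direct transcription of Equation (3) into code might look as follows; the input values in the example are illustrative, not taken from the USRSR.

def k_factor(a, b, c, d, e):
    # K-factor from organic matter a (%), structure class b, permeability class c,
    # particles 0.002-0.1 mm d (%), and particles < 0.002 mm e (%)
    m = d * (100.0 - e)
    return 2.77e-7 * m**1.14 * (12.0 - a) + 0.0043 * (b - 2) + 0.0033 * (c - 3)

print(round(k_factor(a=3.0, b=2, c=3, d=45.0, e=20.0), 4))  # about 0.028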
Land Use
The spatial land use model of the study areas was derived from the GlobCover2009 land cover model [50], from which forests, meadows, and water bodies were identified. Anthropogenic objects (roads, settlements) and arable lands were recognized by us from the ESA WorldCover model [51], obtained using high-resolution Sentinel images.
High-resolution satellite images available in Google Earth for the modern period and images from the KeyHole-4B reconnaissance satellite (CORONA program) for the USSR period have been used to assess land-use dynamics. Eight KeyHole images covering the entire territory for the late 1960s, with a spatial resolution of 1.8 m, were selected for the "Yakutsk" catchment (Table 2).
It should be noted that the sediment yield data cover the 1966–1985 period, whereas the KeyHole images date from the end of the 1960s. However, since the cropland area dynamics during the Soviet period were very minor [52], using these images for the 1966–1985 period seems acceptable for our aim. The Georeferencing module in ArcGIS was used to geo-reference the KeyHole images (projection WGS 84/UTM zone 50, Northern Hemisphere). Modern (2018–2021) very high-resolution satellite images were used as reference data. These images are available as global base maps from Google and ESRI in the QGIS module HCMGIS. Crossroads and unchanged objects (buildings, logging sites) were chosen as reference points. A third-order polynomial transformation and bilinear interpolation were used. The maximum georeferencing errors for all images were less than 6 pixels [53].
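The same georeferencing workflow (ground control points, third-order polynomial transformation, bilinear resampling into WGS 84/UTM zone 50N, i.e., EPSG:32650) can also be reproduced outside ArcGIS, for instance with GDAL's Python bindings. The sketch below is generic: the file names and GCP coordinates are placeholders rather than data from this study.

```python
from osgeo import gdal

# Ground control points: (map x, map y, z, image column, image row)
# in EPSG:32650 (WGS 84 / UTM zone 50N). Values are placeholders.
gcps = [
    gdal.GCP(512345.0, 6891234.0, 0, 100.5, 200.5),
    gdal.GCP(515678.0, 6893456.0, 0, 900.0, 150.0),
    # ... at least 10 well-spread GCPs are needed for a 3rd-order polynomial
]

# Attach the GCPs to the scanned KeyHole image.
src = gdal.Translate("keyhole_gcps.tif", "keyhole_scan.tif",
                     GCPs=gcps, outputSRS="EPSG:32650")

# Warp using a 3rd-order polynomial transformation and bilinear resampling.
gdal.Warp("keyhole_utm50.tif", src,
          dstSRS="EPSG:32650",
          polynomialOrder=3,
          resampleAlg="bilinear")
```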
Cultivated cropland was recognized in modern images by several features: orthogonal boundaries, homogeneous tone, texture (furrows from plowing), and the presence of protective forest lines at the field boundaries (Figure 4a). A crop field plowed for at least one year in the period 2019–2021 was recognized as cultivated. Abandoned cropland is also quite easily identified by overgrown grass (spotted pattern), shrubs, and trees. Sometimes fields are partially flooded (Figure 4b). Due to the lack of multi-temporal data, croplands for the Soviet period were recognized by the general features listed above, except color, since the KeyHole images are black and white (Figure 4c).
The modern crop fields were digitized manually and overlaid onto the KeyHole images. Boundaries were corrected, and the remaining fields were digitized. In some cases, crop fields look quite similar to logging sites; therefore, modern images were used as auxiliary data. If an unclear area appeared as an abandoned field in the modern image, it was identified as a cultivated field for the Soviet period. Thus, vector layers of cropland for the two periods were obtained (Figure 5). The cropland area was calculated, amounting to 6718.5 ha in 1969 and 3337.2 ha in 2021, i.e., 0.42% and 0.21% of the total catchment area. It should be noted that, alongside the general decrease in area, the plowing of new sites is observed in some places. Despite the almost twofold reduction in the area of arable land, its insignificant share of the total catchment area allows us to conclude that its contribution to the suspended sediment yield formation is negligible.
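As a quick consistency check, the reported shares imply the total catchment area directly:

$$ \frac{6718.5\ \text{ha}}{0.0042} \approx 1.6\cdot 10^{6}\ \text{ha} = 16{,}000\ \text{km}^2, \qquad \frac{3337.2\ \text{ha}}{0.0021} \approx 1.6\cdot 10^{6}\ \text{ha}, $$

so both years point to the same "Yakutsk" catchment area, as expected.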
There are no arable lands at all within the «Chara» area, but there is mining from a quarry. The impact of quarries on sediment yield formation proved impossible to assess due to the lack of good coverage of the Chara River catchment with high-resolution images, and quarries are very difficult to recognize in lower-resolution images due to their small size. However, the share of the area occupied by quarries is still an order of magnitude lower than that occupied by cropland, so their contribution appears insignificant. We can therefore use one land use model for the two considered periods (1966–1985; 1986–2019), obtained from the GlobCover2009 land cover model [50] and the ESA WorldCover model [51].
The WaTEM/SEDEM methodology requires not only a spatial model of land use but also a spatial model of the C-factor. Based on the created land use models and the C-factor values proposed by P. Panagos [54] and L.F. Litvin [55], C-factor spatial models were also created for the study areas (Table 3).
Precipitation
The spatial model of the rainfall erosivity factor obtained in the study [56] was used as the basis for this work. The initial data on rainfall intensity for the period 1961–1984 were used for the territory of Russia. Over the past few decades, starting from 1985–1990, an intensification of climate change has been noted by many authors [57,58], expressed in changes in the amount and intensity of precipitation. Unfortunately, there are no modern data on precipitation intensity in the study area; therefore, we analyzed the change in the amount of precipitation within the studied catchment areas. Based on the obtained changes, the model used [56] was corrected for the 1986–2019 time interval.
Analysis of data from the website of the Russian Research Institute of Hydrometeorological Information—World Data Center, meteo.ru, shows a slight increase in the average long-term annual precipitation in the catchment area of the Chara River, from 357 mm (average for 1966–1985) to 400 mm (average for 1986–2019). The increase in average annual precipitation is due to an increase in rainfall from 287 mm to 337 mm (an increase of 15%). Snow precipitation decreased from 59 to 46 mm per year.
The analysis of changes in the amount of precipitation within the "Yakutsk" catchment area shows a much smaller change over the same time intervals, both for average annual and for rainfall precipitation. The average long-term precipitation for the two intervals is 241 mm and 240 mm, and the amount of rainfall precipitation is 162 and 163 mm. The amount of snow precipitation is 79 mm and 77 mm. It can be stated that there is almost no change in the amount of precipitation in this area.
Results
It was found that for the period 1966–1985 the total average annual mass of sediment delivered to the river network from the catchment area of the Chara River is predicted at 616,000 tons (Table 4). The specific sediment yield here is 149 t/km² per year. A 15% increase in precipitation between the 1966–1985 and 1986–2019 intervals should result in an increase in sediment load to 717,000 t/year into the river network, or a specific sediment yield of 172 t/km² per year. The predicted sediment yield from the catchment area to the rivers within the "Yakutsk" catchment is 50,073 tons per year, or a specific sediment yield of 3.5 t/km² per year. According to the predicted data, the sediment yield from the "Yakutsk" catchment area did not change from 1966–1985 to 1986–2019.
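The specific yields follow from the predicted loads by dividing by the contributing area; back-calculating that area is a useful sanity check on the tabulated values:

$$ A_{\text{Chara}} \approx \frac{616{,}000\ \text{t yr}^{-1}}{149\ \text{t km}^{-2}\,\text{yr}^{-1}} \approx 4130\ \text{km}^2, \qquad \frac{717{,}000}{172} \approx 4170\ \text{km}^2, $$

so, up to rounding in the specific yields, both periods imply essentially the same catchment area, as they should.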
A comparative analysis was carried out between the predicted sediment yield from the catchment area to the rivers and the data measured at gauging stations of the national monitoring network of Russia. According to our study, the predicted data differ greatly from the sediment yield measured at the gauging station of the Chara River. The predicted values show an increase in sediment yield, whereas the measured sediment yield of the Chara River decreased from 28 to 15 t/km² per year between the 1966–1985 and 1986–2019 periods, while water discharge remained constant, according to D.V. Magritsky and L.S. Banshchikov [59]. Comparing these data with the simulation data in Table 4, we see that not only the values differ, but also the direction of their change.
There are a few possible explanations for these contradictions. Firstly, the transport capacity coefficients set by default in the WaTEM/SEDEM model may need to be calibrated when working within the mountain catchment area under consideration. However, this cannot be done in this study due to the lack of the data necessary for calibration.
Secondly, the sediment yield from the mountain catchment may be dominated by a larger proportion of coarse particles, which accumulate in the channel and do not reach the measurement station, while those sediments that do arrive are transported as bed load and are not registered at the measurement station.
Thirdly, lakes Leprindo and Leprindokan, located in the riverbed, can act as large sediment traps capturing a significant part of the sediment.
A comparative analysis of the simulated sediment yield from the "Yakutsk" catchment area shows good agreement with the data measured at the Tabaga gauging station. The sediment yield from river catchments in this area is 3.5 t/km² per year and, according to our estimates, has not changed over the past 34 years. The sediment yield measured in the river is 7.08 t/km² per year according to the 1966–1985 data and 9.45 t/km² per year according to the data measured in the 1986–2019 interval [59].
Net Erosion Maps Analysis
The maps of net erosion, which represent the erosion–accumulation budget of the studied areas, are an additional result of this work. Such maps were created for the 1986–2019 period. The entire studied part of the Chara River catchment is characterized by net soil erosion losses of 1.72 t/ha per year in the period 1986–2019, while the gross soil erosion is about 17 t/ha per year. According to the predicted data, soil erosion occurs within 96% of the Chara catchment area. Large erosion values in the catchment are generally located on steep slopes in the upper reaches of the Chara tributaries. The accumulation of part of the eroded material occurs in a small part of the study area (4%), at the foot of steep slopes and in river valleys within the dry valley, and is characterized by high rates, on average up to −390 t/ha per year (Figure 6). The "Yakutsk" study area is characterized by very small soil erosion losses, averaging 0.035 t/ha per year in the period 1986–2019. This value is typical for the whole catchment and reflects the accumulation of part of the eroded material within the catchment area. The gross soil erosion is about 0.1 t/ha per year. According to the predicted data, soil erosion occurs within 98% of this catchment area. High erosion values in the catchment most often correspond to the steep left slope of the Lena River; in addition, relatively high soil erosion losses are typical for the right banks of the tributaries of the Lena River (Figure 7). The accumulation of part of the eroded material occurs in a small part of the study area (2%) and is characterized by an average value of −2.5 t/ha per year.
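The gap between gross and net erosion quantifies how much eroded material is redeposited before reaching the rivers; a simple ratio makes this explicit for the Chara catchment:

$$ \frac{\text{net}}{\text{gross}} = \frac{1.72}{17} \approx 0.10, $$

i.e., roughly 90% of the eroded material redeposits within the catchment, anticipating the conclusion drawn below that most eroded material remains inside the catchment areas.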
Discussion
The "Yakutsk" catchment area has repeatedly become the subject of a study on the assessment of gross erosion as part of larger-scale work.Therefore, we compared the values of gross erosion obtained by us and the results of previous studies (Table 5).
Analysis of Table 5 shows that different results were obtained, with some estimates differing by more than a factor of three. These differences can be explained by a few reasons. One is the scale of the study: this study was performed using a raster model with a grid pixel size of 100 m, whereas the study by the authors of [21] was performed with a grid pixel size of 250 m; moreover, the model proposed by Moscow State University [61] is used instead of the RUSLE model in that study [21]. The differences between the estimates obtained in this study and the study of S.R. Chalov [60] can be explained by the different initial data on soil erodibility. S.R. Chalov's study calculates erodibility from HWSD v 1.2 (Harmonized World Soil Database, created by FAO, Rome, Italy and IIASA, Laxenburg, Austria) [62]. With HWSD, the entire Yakutsk site has a single erodibility value, whereas the USRSR initial data show that several soil types prevail within the Yakutsk site, differing greatly in erodibility due to different soil organic matter content and different granulometric composition.
Studies producing net erosion maps of the annual erosion–accumulation budget of deposits are rare. For example, the USPED model was used to produce erosion and accumulation maps for Washington State (USA) [63]. However, only a qualitative analysis was conducted in that study, with no quantitative estimates of the sediment budget: six categories of land characterized by erosion/accumulation were identified, three of erosion and three of accumulation.
A quantitative spatial model of the erosion–accumulation budget has been created within 68 river catchments in Spain using the WaTEM/SEDEM model [33,64]. For example, WaTEM/SEDEM was used to create a net erosion map within the catchment of the Taibilla reservoir, with mountainous relief, in the study [33]. The erosion intensity within the Taibilla basin reaches a maximum of 20 t/ha per year, which is comparable to the rates for the Chara River basin (17 t/ha per year). At the same time, the intensity of accumulation in these two basins is quite different.
A net water erosion map based on the WaTEM/SEDEM model was constructed in Mongolia for analyzing the contribution of gold mines to the sediment yield of the Tuul River [35]. Although the WaTEM/SEDEM model provides quantitative values of erosion/accumulation, the authors of [35] do not report them and present only five categories: deposition, low erosion, moderate erosion, high erosion, and very high erosion.
Studies using the WaTEM/SEDEM model were also carried out within China, in the Shuangfengtan catchment [32]. That study additionally assessed the feasibility of using the WaTEM/SEDEM model to predict sediment yield from the catchment area, and a quantitative map of the erosion/accumulation budget was obtained. The erosion values reach a maximum of 80 t/ha per year, which is much higher than the erosion values obtained by us.
Conclusions
In this study, we analyzed the possibility of assessing sediment yield and its dynamics using the WaTEM/SEDEM model within two catchment areas located in the Lena River basin and differing in relief conditions. The analysis was performed by comparing simulation results with observed data.
It was found that the simulated sediment yield (172 t/km² per year for 1986–2019) significantly exceeds the observed suspended sediment yield (15 t/km² per year) at the gauging station within the mountain catchment area of the Chara River. The predicted data also show temporal dynamics inverse to the measured sediment yield at the gauging station. The simulation results within the Yakutsk catchment area, located within the plain territory, are more consistent with the measurements at the gauging stations, both in absolute values and in their dynamics over the past few decades. The modeled sediment yield from this study area has not changed and is 3.5 t/km², while the suspended sediment yield of the Lena at the Tabaga post slightly increased from 7 t/km² to 9.45 t/km² per year from 1966–1985 to 1986–2019.
An analysis of the obtained maps shows that more than 96% of the considered catchment areas are subject to erosion processes, while accumulation processes occur in less than 4% of the area. An analysis of the gross and net erosion values in the considered catchments shows that most of the eroded material remains within the catchment areas.
Figure 4.
Figure 4. Modern cultivated cropland on the high-resolution images (a), abandoned crop fields (b), and cultivated cropland in 1969 (c).
Figure 6.
Figure 6. A spatial model of net water erosion in the catchment of the river Chara (1986–2019 period).
Figure 7.
Figure 7. A spatial model of net water erosion of the «Yakutsk» catchment area (1986–2019 period).
Table 1 .
The main natural characteristics of the studied areas.
Table 3 .
Land use/land cover and their C-factor value.
Table 4 .
Predicted values of sediment yield in rivers from the territory of the study areas. | 2022-10-01T15:05:51.359Z | 2022-09-28T00:00:00.000 | {
"year": 2022,
"sha1": "6a41d60380472075c9bd95921a86220d6dc11f22",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4441/14/19/3055/pdf?version=1664369107",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "70f4ac5c2190c0e1feae69bde0715003b86266b5",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
126078083 | pes2o/s2orc | v3-fos-license | Mixed displacement–rotation–pressure formulations for linear elasticity
We propose a new locking-free family of mixed finite element and finite volume element methods for the approximation of linear elastostatics, formulated in terms of displacement, rotation vector, and pressure. The unique solvability of the three-field continuous formulation, as well as the well-definiteness and stability of the proposed Galerkin and Petrov–Galerkin methods, is established thanks to the Babuška–Brezzi theory. Optimal a priori error estimates are derived using norms robust with respect to the Lamé constants, turning these numerical methods particularly appealing for nearly incompressible materials. We exemplify the accuracy (in a suitably weighted norm), as well as the applicability of the new formulation and the mixed schemes, by conducting a number of computational tests in 2D and 3D, also including cases not covered by our theoretical analysis. © 2018 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/). MSC: 65N30; 65N12; 76D07; 65N15
Introduction
The numerical solution of elasticity-based problems encompasses well-documented difficulties. For instance, for pure-displacement formulations, the use of classical finite element discretisations based on piecewise linear and continuous elements, ensures accuracy only for moderate values of the Poisson ratio ν. As ν → 0.5, that is, when the Lamé dilation modulus λ → ∞, and the elastic material becomes nearly incompressible, the numerical scheme might generate spurious solutions (unphysically small deformations related to the well-known locking phenomenon, see for instance [1]). A number of appropriate formulations together with their associated numerical methods are available to overcome this issue. Notably, choosing a mixed scheme would produce accurate solutions even for nearly incompressible materials, and at the same time, one accommodates the direct approximation of auxiliary variables of interest such as pressure, stress, or rotations.
One of the most common mixed approaches for linear elasticity is the Hu-Washizu formulation [2,3]. Some popular methods based on such formulation include the enhanced assumed strain method [4], the assumed stress method [5], the mixed-enhanced strain method [6], the strain gap method [7], and the B-bar scheme [1]. Some of these methods actually coincide under certain conditions (see the discussions in e.g. [8][9][10]). The well-posedness for this class of formulations has been established in [11], where it is also shown that a modified version of the Hu-Washizu formulation is more amenable for obtaining uniform convergence in the incompressibility limit. Alternatively, other mixed approaches (such as the Hellinger-Reissner principle) can be employed to obtain robust methods with respect to the Lamé constants.
Schemes more closely related to the present contribution state the problem using stresses and rotations. We mention for instance mixed formulations based on stress [12][13][14], the augmented scheme in [15], a family of pseudostress-based methods from [16], displacement-pressure mixed formulations [17]; and the first-order least squares presented in [18]. More recent least squares schemes in connection with the present context include saddle-point least squares methods [19], mixed approaches also considering anisotropy, large strains and quasi-incompressibility, or others applied specifically to plates [20,21]. We refer as well to other locking-free methods for plate models [22], and to the membrane elements introduced in [23], also including the rotation tensor as an additional field.
In contrast to the brief literature survey given above, here we advocate a novel formulation of the elasticity equations in terms of displacement, rotation vector, and pressure (similar ideas in the context of vorticity-based formulations for Stokes and Brinkman equations can be found e.g. in [24][25][26][27]). This three-field formulation has a resemblance with the displacement, pressure, and vorticity momentum formulations for acoustic fluid-structure interaction studied in [28]. However in that reference, the system is solved for the fluid displacement and the vorticity momentum arises as the Lagrange multiplier imposing an irrotationality constraint.
In our case, after regarding the pressure together with the rotation vector as a single auxiliary unknown (defined in an appropriate product functional space), we are able to analyse the solvability of the resulting mixed variational formulation simply appealing to the classical Babuška-Brezzi theory for saddle-point problems. Thanks to a rescaling of the rotation vector norm, the well-posedness result and the continuous dependence on the data turn out to be independent of the Lamé constants. This analysis is valid for (possibly non-homogeneous) Dirichlet boundary conditions, and bounded Lipschitz domains.
Concerning numerical approximation, we first introduce a family of finite elements given by piecewise continuous polynomials of degree k ≥ 1 for the displacement, and piecewise polynomials of degree k − 1 for the rotation and pressure. The unique solvability of the finite element scheme is then established using analogous techniques as in the continuous case, that is, exploiting a weighted norm. In addition, we prove optimal a priori error estimates with constants fully independent of the Lamé coefficient λ; guaranteeing robustness of the method also in the nearly incompressible limit. Nevertheless in the case of full incompressibility, both continuous and discrete problems are not necessarily well-defined.
We remark that in the two-dimensional case, the computational cost of the proposed FE method in its lowest order configuration is 2|N h | (where N h denotes the set of vertices in the mesh and |N h | its cardinality), which is lower than, for instance, the MINI element for displacement-pressure formulations (accounting for 7|N h | local degrees of freedom). Furthermore, even if our method involves two additional unknowns (pressure and rotation vector), these can be statically condensed at the implementation stage without incurring in additional computational cost, thus turning the proposed discretisation into a very competitive method.
A further goal in this contribution is to construct a finite volume element (FVE) method specifically tailored for elasticity equations. FVE schemes correspond to Petrov-Galerkin formulations where the trial space is constructed using a primal partition of the domain, whereas the test space is associated with either a dual mesh or a dual basis. Depending on the particular kind of dual grid, the transfer operator between trial and test spaces possesses different interpolation properties which are used in recasting a preliminary pure finite volume formulation into a Petrov-Galerkin one. In general, these methods enjoy some features shared by finite element and finite volume schemes, including local flux conservation properties, liberty to choose different numerical fluxes and dual partitions associated to unstructured primal meshes; and several others (see for example [29]). Discretisation schemes following this principle have been systematically employed in numerous fluid flow problems, including Stokes, Navier-Stokes (see e.g. [30][31][32][33][34]) and also in coupled flow-transport systems arising from diverse applications (see [35][36][37]). However, to the best of our knowledge, the only contributions addressing FVE-like discretisations for solid mechanics are the hybrid-stress finite volume method for linear elasticity on quads studied in [38]; and [39], where two alternative stabilisation approaches based on nodal pressure and dual bases and meshes are applied to construct inf-sup stable approximations for nearly incompressible linear elasticity. The class of finite volume element methods we introduce here is based on the lowest-order mixed finite element method discussed above. As in well-established FVE schemes for Stokes equations (cf. [31,32]), it turns out that the two schemes differ only by the assembly of the forcing term, and therefore straightforward derivation of stability properties and energy estimates in natural norms can be done exploiting the results obtained for the family of mixed finite elements. In addition, the FVE scheme features mass conservativity on the dual control volumes, suitability for irregular domains and unstructured partitions, and robust approximations of displacements. We also observe that the proposed schemes perform very well across the scope of regimes considered in our numerical simulations.
Outline. We have structured the contents of the manuscript in the following manner. To simplify the exposition, a few recurrent notations and useful identities are recalled in the remainder of this section. In Section 2 we lay out the precise form of the linear elasticity equations that we will focus on, we derive a suitable mixed weak formulation, and provide its solvability analysis. A Galerkin method is introduced in Section 3, where we also obtain stability properties and a priori error estimates. Section 4 concentrates on the development of a low-order FVE scheme, and its accuracy is studied in connection with the properties of the FE method. We briefly discuss up to which extent the definition and construction of the proposed FE and FVE schemes needs to be modified in order to accommodate the study of mixed displacement-traction boundary conditions. Some indications on how the analysis could be extended are also addressed. Finally, the convergence and robustness of the proposed methods is illustrated via a set of insightful computational tests collected in Section 5, including also some comparisons against other methods.
We recall the standard notation for the differential operators div and curl. We also recall a version of Green's formula given in e.g. [40, Theorem 2.11],
∫_Ω curl v · φ − ∫_Ω v · curl φ = ⟨n × v, φ⟩_∂Ω, (1.1)
and the following useful identity
curl(curl v) = −∆v + ∇(div v). (1.2)

2. The model problem
Derivation of a displacement-rotation-pressure formulation
We assume that an isotropic and linearly elastic solid occupies a polyhedral bounded domain Ω of R^d, with Lipschitz boundary ∂Ω. Determining the deformation of a linearly elastic body subject to a volume load and with given boundary conditions, and adopting the hypothesis of small strains, results in the classical linear elasticity problem, formulated as follows. Given an external force f̂ and a prescribed boundary motion g, we seek the displacements u such that
−div(2µ ε(u) + λ (div u) I) = f̂ in Ω, u = g on ∂Ω, (2.1)
where ε(u) = ½(∇u + ∇uᵗ) is the infinitesimal strain tensor, I denotes the d × d identity matrix, and µ, λ are the Lamé coefficients (material properties of the solid, here assumed constant).
Next, and following the seminal work [12], one notices that, using the identity div(ε(u)) = ½(∆u + ∇(div u)) and dividing the momentum equation by λ + µ, we can rewrite (2.1) in the form of the well-known Cauchy–Navier (or Navier–Lamé) equations
−(µ/(λ + µ)) ∆u − ∇(div u) = f in Ω, u = g on ∂Ω, (2.2)
where the body load has been rescaled as f = (1/(λ + µ)) f̂. We then proceed to define the auxiliary scaling parameter η := µ/(λ + µ) > 0, and recast (2.2) in a displacement–pressure formulation (considering p = −div u as the solid pressure) as follows:
−η ∆u + ∇p = f, p + div u = 0 in Ω, u = g on ∂Ω. (2.3)
At this point, and with the aim of deriving formulations whose stability holds independently of the Lamé coefficient λ, we introduce the field of rescaled rotations ω := √η curl u as an additional unknown in the problem. Exploiting (1.2) and the definition of the pressure in terms of the displacements, we observe that (2.3) is fully equivalent to the following set of governing equations, in their pure-Dirichlet case. Find the displacement u, the rotation ω, and the pressure p such that (see [18]):
√η curl ω + (1 + η)∇p = f in Ω, (2.4)
ω − √η curl u = 0 in Ω, (2.5)
p + div u = 0 in Ω, (2.6)
u = g on ∂Ω. (2.7)
The theoretical analysis will be restricted to the case of clamped boundaries, g = 0. The case of non-homogeneous Dirichlet boundary conditions can be analysed in an analogous manner after introducing a suitable displacement lifting. On the other hand, the incorporation of mixed (displacement–traction) boundary conditions will be addressed in Sections 3.2 and 4.2.
Weak form of the governing equations
Let us introduce the functional spaces H := H¹₀(Ω)^d, Z := L²(Ω)^d, and Q := L²(Ω), where Z and Q are endowed with their natural norms, and we recall the definition of the norm in the product space Z × Q as ∥(θ, q)∥²_{Z×Q} := ∥θ∥²_{0,Ω} + ∥q∥²_{0,Ω}. On the other hand, for H we consider the following η-dependent scaled norm (see, for instance, [40, Remark 2.7]): ∥v∥²_H := η∥curl v∥²_{0,Ω} + ∥div v∥²_{0,Ω}, which is motivated by the natural energy form of the momentum conservation equation in (2.3) and by the rescaled rotation. Notice that the stability of the continuous and discrete problems will be stated in terms of this norm.
We proceed to test (2.4)–(2.6) against adequate functions, to integrate by parts in two terms, and to take into account the boundary conditions (2.7), in such a way that the resulting mixed variational formulation reads as follows. Introducing the bilinear forms a : (Z × Q) × (Z × Q) → R and b : H × (Z × Q) → R, together with the linear functional F : H → R, given by
a((ω, p), (θ, q)) := ∫_Ω ω · θ + (1 + η) ∫_Ω p q, b(v, (θ, q)) := √η ∫_Ω curl v · θ − (1 + η) ∫_Ω q div v, F(v) := ∫_Ω f · v,
for all v ∈ H, ω, θ ∈ Z, and p, q ∈ Q, we realise that the variational problem above can be recast as: Find ((ω, p), u) ∈ (Z × Q) × H such that
a((ω, p), (θ, q)) − b(u, (θ, q)) = 0 for all (θ, q) ∈ Z × Q, (2.8)
b(v, (ω, p)) = F(v) for all v ∈ H. (2.9)

Remark 1. Note that the natural regularity for displacements in (2.8)–(2.9) is actually H₀(curl, Ω) ∩ H₀(div, Ω), where H₀(curl, Ω) := {v ∈ H(curl, Ω) : (v × n)|_∂Ω = 0} and H₀(div, Ω) := {v ∈ H(div, Ω) : (v · n)|_∂Ω = 0}. According to [40, Lemma 2.5], an algebraic and topological equivalence between this space and H = H¹₀(Ω)^d holds under quite general assumptions on the domain: Ω only needs to be bounded and ∂Ω Lipschitz-continuous (see also [40, Remark 2.7]). In other instances (for example in the analysis of vector Laplacians, see e.g. [41, Section 2.3.2]), if tangential and normal components of the displacement are to be fixed on different parts of the boundary, then convexity of the domain is also required. However, that is not the case in the present study.
Remark 2. In the incompressibility limit ν = 0.5 (that is, η = 0), the problem defined in (2.8)–(2.9) reduces to: find ((ω, p), u) such that
∫_Ω ω · θ + ∫_Ω p q + ∫_Ω q div u = 0, −∫_Ω p div v = ∫_Ω f · v,
for all ((θ, q), v). However, it is not difficult to see that u then satisfies ∇(div u) = −f in Ω, which is not well-posed in the space H. Moreover, note that after rescaling, the body force also goes to zero in the incompressibility limit.
Well-posedness
The unique solvability of problem (2.8)-(2.9), together with the continuous dependence on the data will be established using the well-known Babuška-Brezzi theory.
We first observe that the bilinear forms a(·,·), b(·,·) and the linear functional F(·) are all bounded with positive constants independent of η (and therefore independent of the Lamé coefficient λ). In fact, since 0 < η < 1, it is easy to check that
|a((ω, p), (θ, q))| ≤ 2 ∥(ω, p)∥_{Z×Q} ∥(θ, q)∥_{Z×Q}, |b(v, (θ, q))| ≤ 2 ∥v∥_H ∥(θ, q)∥_{Z×Q}.
In addition, the bilinear form a(·,·) is (Z × Q)-elliptic, uniformly with respect to the scaling parameter η, as stated in the following result.

Lemma 2.1. For all (θ, q) ∈ Z × Q there holds a((θ, q), (θ, q)) = ∥θ∥²_{0,Ω} + (1 + η)∥q∥²_{0,Ω} ≥ ∥(θ, q)∥²_{Z×Q}.
Moreover, an inf-sup condition holds for the bilinear form b(·,·).

Lemma 2.2. There exists C > 0, independent of η, such that
sup_{(θ,q) ∈ (Z×Q)\{0}} b(v, (θ, q)) / ∥(θ, q)∥_{Z×Q} ≥ C ∥v∥_H for all v ∈ H.

Proof. Let us consider a generic v ∈ H and define θ̃ := √η curl v and q̃ := −div v. We immediately notice that ∥(θ̃, q̃)∥²_{Z×Q} = η∥curl v∥²_{0,Ω} + ∥div v∥²_{0,Ω} = ∥v∥²_H, and from the definition of b(·,·) we readily obtain
b(v, (θ̃, q̃)) = η∥curl v∥²_{0,Ω} + (1 + η)∥div v∥²_{0,Ω} ≥ ∥v∥²_H,
which finishes the proof. □ We are now in a position to state the solvability of the continuous problem (2.8)–(2.9).
Theorem 2.1. There exists a unique solution ((ω, p), u) ∈ (Z × Q) × H to problem (2.8)–(2.9), which satisfies the following continuous dependence on the data:
∥(ω, p)∥_{Z×Q} + ∥u∥_H ≤ C ∥f∥_{0,Ω}, with C > 0 independent of η.

Proof. By virtue of the general theory for saddle-point problems (see e.g. [42]), the desired result follows from a direct application of Lemmas 2.1 and 2.2. □

Owing to the well-known regularity for the elasticity equations (see e.g. [43], [44, Theorem 5.2]), the solution u of (2.8)–(2.9) belongs to H^{1+s}(Ω)^d, for some s > 0 depending on the geometry of Ω and on the Lamé coefficients (and consequently on η). Moreover, there exists C > 0 independent of f such that
∥u∥_{1+s,Ω} ≤ C ∥f∥_{0,Ω}. (2.12)
Finite element discretisation
In this section, we introduce a Galerkin scheme associated to (2.8)-(2.9), we specify the finite dimensional subspaces to employ, and analyse the well-posedness of the resulting methods using suitable assumptions on the discrete spaces. The section also contains a derivation of error estimates.
Formulation, solvability, and error bounds
Given an integer k ≥ 1 and a set S ⊂ R d , the space of polynomial functions defined in S and having total degree ≤ k will be denoted by P k (S).
Next, we define the following discrete spaces:
H_h := {v_h ∈ H ∩ C(Ω̄)^d : v_h|_K ∈ P_k(K)^d for all K ∈ T_h},
Z_h := {θ_h ∈ Z : θ_h|_K ∈ P_{k−1}(K)^d for all K ∈ T_h},
Q_h := {q_h ∈ Q : q_h|_K ∈ P_{k−1}(K) for all K ∈ T_h},
which are subspaces of H, Z and Q, respectively; and proceed to state a Galerkin scheme for the continuous variational problem (2.8)–(2.9): Find ((ω_h, p_h), u_h) ∈ (Z_h × Q_h) × H_h such that
a((ω_h, p_h), (θ_h, q_h)) − b(u_h, (θ_h, q_h)) = 0 for all (θ_h, q_h) ∈ Z_h × Q_h, (3.1)
b(v_h, (ω_h, p_h)) = F(v_h) for all v_h ∈ H_h. (3.2)
Our next goal is to establish discrete counterparts of Lemmas 2.1 and 2.2, leading to the solvability and stability of the Galerkin method (3.1)–(3.2). Their proofs are obtained using the same arguments exploited in the continuous case. For completeness we provide the essential steps of the latter result.
Lemma 3.2. There exists C > 0, independent of h and η, such that
sup_{(θ_h,q_h) ∈ (Z_h×Q_h)\{0}} b(v_h, (θ_h, q_h)) / ∥(θ_h, q_h)∥_{Z×Q} ≥ C ∥v_h∥_H for all v_h ∈ H_h.

Proof. For a generic v_h ∈ H_h, let us define θ̃_h := √η curl v_h ∈ Z_h and q̃_h := −div v_h ∈ Q_h. Then we readily notice that ∥(θ̃_h, q̃_h)∥²_{Z×Q} = ∥v_h∥²_H, and so, from the definition of the bilinear form b(·,·), we arrive at the desired bound
b(v_h, (θ̃_h, q̃_h)) = η∥curl v_h∥²_{0,Ω} + (1 + η)∥div v_h∥²_{0,Ω} ≥ ∥v_h∥²_H. □
We can now state the unique solvability, stability, and convergence properties of the discrete problem (3.1)–(3.2), formulated in the three following theorems.
Theorem 3.1. There exists a unique solution ((ω_h, p_h), u_h) ∈ (Z_h × Q_h) × H_h to the Galerkin scheme (3.1)–(3.2). Moreover, there exists a constant C > 0, independent of h and η, such that
∥(ω_h, p_h)∥_{Z×Q} + ∥u_h∥_H ≤ C ∥f∥_{0,Ω}.
In addition, the following approximation property is satisfied:
∥(ω − ω_h, p − p_h)∥_{Z×Q} + ∥u − u_h∥_H ≤ C inf { ∥(ω − θ_h, p − q_h)∥_{Z×Q} + ∥u − v_h∥_H : (θ_h, q_h) ∈ Z_h × Q_h, v_h ∈ H_h },
where ((ω, p), u) ∈ (Z × Q) × H is the unique solution of (2.8)–(2.9).

Theorem 3.2. Under the additional regularity (2.12), the best-approximation property above yields the convergence rate
∥(ω − ω_h, p − p_h)∥_{Z×Q} + ∥u − u_h∥_H ≤ C h^{min{s,k}} ∥u∥_{1+s,Ω},
where s > 0 is such that the bound (2.12) is satisfied, and k ≥ 1 denotes the polynomial degree.
Proof. The result follows from Theorem 3.1 and the standard error estimates for the Scott–Zhang interpolant of u and for the vectorial and scalar L²-orthogonal projections for ω and p, respectively, together with the additional regularity (2.12). □ To close this section, we observe that the convergence of the displacement approximation can also be measured in the L²(Ω)^d-norm, thanks to a classical duality strategy.
Theorem 3.3. Let ((ω, p), u) and ((ω_h, p_h), u_h) be the solutions of the continuous and discrete problems (2.8)–(2.9) and (3.1)–(3.2), respectively. Then, there exist constants ŝ ∈ (0, 1] (depending on Ω and on η) and C > 0 (independent of h and η), such that
∥u − u_h∥_{0,Ω} ≤ C h^ŝ ( ∥(ω − ω_h, p − p_h)∥_{Z×Q} + ∥u − u_h∥_H ).

Proof. Resorting to a duality argument, we first consider the following well-posed auxiliary problem: find ((ξ, φ), z) ∈ (Z × Q) × H such that
a((ξ, φ), (θ, q)) − b(z, (θ, q)) = 0 for all (θ, q) ∈ Z × Q, (3.3)
b(v, (ξ, φ)) = ∫_Ω (u − u_h) · v for all v ∈ H. (3.4)
Note that, as a consequence of (2.12), the unique solution of (3.3)–(3.4) features additional regularity. More precisely, we can assert that there exists ŝ ∈ (0, 1] as in (2.12), and C̃ > 0 (independent of η), such that
∥z∥_{1+ŝ,Ω} ≤ C̃ ∥u − u_h∥_{0,Ω}. (3.5)
Next, and thanks to (3.4), we observe that
∥u − u_h∥²_{0,Ω} = b(u − u_h, (ξ, φ)) = b(u − u_h, (ξ, φ) − (θ_h, q_h)) + b(u − u_h, (θ_h, q_h)) (3.6)
for all (θ_h, q_h) ∈ Z_h × Q_h, where we have also employed (2.8) and (3.1). We then proceed to bound the second term on the right-hand side of (3.6). This is carried out by adding and subtracting (ξ, φ) and applying (3.3), where in the last step we also use (2.9) and (3.2), valid for all z_h ∈ H_h. Hence, from (3.6)–(3.7), we can deduce the desired estimate after taking (θ_h, q_h) as the L²-orthogonal projection of (ξ, φ) and choosing z_h as the Scott–Zhang interpolation of z onto the piecewise linear and continuous vector fields. Thus, the proof follows from standard error estimates, the additional regularity (3.5), and Theorem 3.2. □
We point out that the value ŝ ∈ (0, 1] is associated with the regularity invoked in (3.5) when the datum of (3.3)–(3.4) belongs to L²(Ω)^d.
A discrete formulation with mixed displacement-traction boundary conditions
Let us now consider the case where a given displacement g is imposed only on a part of the boundary Γ_D ⊂ ∂Ω, and a given traction t̂ is prescribed on the remainder of the boundary, say Γ_N = ∂Ω \ Γ_D. In this case (2.7) is replaced by
u = g on Γ_D, (2µ ε(u) + λ (div u) I) n = t̂ on Γ_N, (3.8)
where n denotes the outward unit normal on Γ_N. This traction condition can be conveniently recast in terms of the field variables as
2η (∇u) n − √η ω × n − (1 − η) p n = t on Γ_N, (3.9)
where the rescaled traction is t = (1/(λ + µ)) t̂, and where we have used the well-known identity ε(u)n = (∇u)n − ½ curl u × n.
The form of (3.9) readily implies that the displacement should now belong to the space M := H_{Γ_D}(curl, Ω) ∩ H_{Γ_D}(div, Ω). It has been proved in [25, Lemma 1] (restricted to the 2D case) that there exists δ ∈ (1/2, 1] such that M is continuously imbedded in H^δ(Ω)². If we set again homogeneous data on the Dirichlet boundary, we could then, as a first attempt, propose the following discrete modification of (3.1)–(3.2) incorporating mixed displacement–traction boundary conditions, (3.10)–(3.11), posed on a discrete subspace M_h of M (cf. [25]) and involving the diagonal bilinear form c : M_h × M_h → R,
c(u_h, v_h) := ∫_{Γ_N} 2η (∇u_h − (div u_h) I) n · v_h, (3.12)
which collects the boundary terms arising from the integration by parts. Since this bilinear form is non-symmetric and not necessarily semi-positive definite, the analysis of (3.10)–(3.11) does not fall into the same framework as (3.1)–(3.2). A possible way around this would be to define a fixed-point iteration scheme that treats c(·,·) as part of the linear functional; the solvability analysis could then be carried out following e.g. [46]. Alternatively, one could introduce suitable Lagrange multipliers to deal with the boundary terms. Further investigation is necessary in this regard, and we simply mention that the implementation and numerical verification of test cases involving (3.10)–(3.12) will be addressed in Section 5, where we observe optimal convergence.
Formulation and main properties
In addition to the mesh T_h (from now on, the primal mesh), we introduce another partition of Ω, denoted by T⋆_h and referred to as the dual mesh. For each element K ∈ T_h we create segments joining its barycentre b_K with the midpoints (2D barycentres) m_F of each face F ⊂ ∂K (or the midpoints of each edge, in 2D), forming four polyhedra (or three quadrilaterals, in the 2D case) Q_z for z in the set of vertices of K, that is, z ∈ N_h ∩ K. Then to each vertex s_j ∈ N_h we associate a so-called control volume K⋆_j, consisting of the union of the polyhedra (quadrilaterals in 2D) Q_{s_j} sharing the vertex s_j. A sketch of the resulting control volume associated with s_j is depicted in Fig. 4.1(a).
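In 2D the construction above is purely combinatorial: each triangle contributes one quadrilateral per vertex, built from that vertex, the two adjacent edge midpoints, and the barycentre. The helper below illustrates this; the data layout (a coordinate array plus a connectivity array) is our own choice for the sketch.

```python
import numpy as np

def dual_quads(points, triangles):
    """For each triangle and each of its vertices, yield the quadrilateral
    Q_z = [vertex, adjacent edge midpoint, barycentre b_K, other midpoint].

    points : (n, 2) array of vertex coordinates.
    triangles : (m, 3) integer array of vertex indices.
    Yields (vertex_index, 4x2 array of quadrilateral corners).
    """
    for tri in triangles:
        p = points[tri]                          # the three corners of K
        b = p.mean(axis=0)                       # barycentre b_K
        for i in range(3):
            z = p[i]
            m_prev = 0.5 * (z + p[(i - 1) % 3])  # midpoint of one edge at z
            m_next = 0.5 * (z + p[(i + 1) % 3])  # midpoint of the other edge
            yield tri[i], np.array([z, m_next, b, m_prev])

# The control volume K*_j is the union, over all triangles containing
# vertex j, of the quadrilaterals yielded for that vertex.
```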
In its lowest-order version, a FVE method for the approximation of (2.8)–(2.9) can be constructed by associating a discrete test space with the dual partition of the domain,
H⋆_h := {v⋆ ∈ L²(Ω)^d : v⋆ is constant on each K⋆_j ∈ T⋆_h and vanishes on control volumes associated with boundary vertices},
and we notice that no additional space is introduced for the finite volume approximation of ω or p. Furthermore, we define the T⋆_h-piecewise lumping map H_h : H_h → H⋆_h, which relates the primal and conforming dual meshes, by
H_h v_h := Σ_j v_h(s_j) χ_j for all v_h ∈ H_h,
where χ_j is the vectorial characteristic function of the control volume K⋆_j and {φ_j}_j is the canonical FE basis of H_h (cf. [32]). For any v ∈ H, this operator satisfies the interpolation bound (see e.g. [29]) ∥v − H_h v∥_{0,Ω} ≤ C h |v|_{1,Ω}. In addition, since for the type of domains we are considering we can write H := H¹₀(Ω)^d = H₀(curl; Ω) ∩ H₀(div; Ω), then [40, Remark 2.7] implies that the operator H_h(·) also satisfies
∥v − H_h v∥_{0,Ω} ≤ C h ∥v∥_H, (4.1)
which plays a role in the convergence proof for the envisioned FVE method. The discrete FVE formulation is obtained by multiplying (2.4) by v⋆_h ∈ H⋆_h and integrating by parts over each K⋆_j ∈ T⋆_h, multiplying (2.5) by θ_h ∈ Z_h and integrating by parts over each K ∈ T_h, and multiplying (2.6) by (1 + η)q_h, for q_h ∈ Q_h, and integrating by parts over each K ∈ T_h. This, along with identity (1.1), results in a Petrov–Galerkin formulation that reads as follows: Find ((ω_h, p_h), u_h) ∈ (Z_h × Q_h) × H_h such that (4.2)–(4.3) hold, where the bilinear form b̃ : H⋆_h × (Z_h × Q_h) → R collects the boundary integrals over the control volumes resulting from the integration by parts. We also introduce the bilinear form B : H_h × (Z_h × Q_h) → R, B(v_h, (θ_h, q_h)) := b̃(H_h v_h, (θ_h, q_h)), which will be used to show that the Petrov–Galerkin formulation (4.2)–(4.3) can be regarded as a Galerkin method. We proceed to establish a relationship between the bilinear forms b(·,·) and B(·,·), which will be useful to carry out the error analysis in a finite-element fashion. For the sake of brevity, only the proof for the two-dimensional case is provided; the proof for the three-dimensional case follows in an analogous manner, considering polyhedral control volumes and boundary surfaces rather than boundary edges.
Proof. First, let g be a function that is continuous in the interior of each quadrilateral Q_j (as shown in Fig. 4.1(c)) with ∫_e g = 0 for any boundary edge e. Using Fig. 4.1(c), it is straightforward to show that a cancellation relation holds along the broken lines m_{j+1} b_K m_j, where m_{j+1} b_K m_j denotes the union of the line segments m_{j+1} b_K and b_K m_j (we take m_{j+3} = m_j in case the index is out of bounds). Next, starting from the definition of the transfer operator H_h(·), and in order to arrive at (4.4), we use the definition of B(·,·) in combination with integration by parts and the fact that both q_h and v_h(s_j) are constant in the interior of each quadrilateral Q_j.
Since q_h and θ_h are constant on the edges of each element K ∈ T_h, the edge integrals can be rewritten elementwise. Then, after one application of integration by parts and identity (1.1), we can assert the claimed identity, which completes the proof. □ As a consequence, the bilinear form B(·,·) is bounded uniformly with respect to η.
A FVE method with displacement-traction boundary conditions
Analogously to Section 3.2, we discuss here how our FVE scheme can be modified to incorporate mixed boundary conditions. We first define the space H̃⋆_h (the dual-mesh counterpart of Ĥ_h), then test (2.4) against v⋆_h ∈ H̃⋆_h and integrate by parts. Next we reason as in the proof of Lemma 4.1 by considering separately the edges that coincide with the boundary segment Γ_N. More precisely, substituting H_h v_h and using the definition of the traction t, valid for every v_h ∈ Ĥ_h and every edge e of each K ∈ T_h (cf. [32]), together with the fact that p_h ∈ Q_h and ω_h ∈ Z_h are constant on each element K ∈ T_h, implies that
(4.5)
where we have also used that the union of the boundary edges of the control volumes coincides with the union of the boundary edges of the elements. Consequently, we can combine (4.5)–(4.6) with (1.1) to finally obtain the FVE formulation with mixed displacement–traction boundary conditions, where the newly introduced bilinear form C : Ĥ_h × Ĥ_h → R collects the corresponding boundary contributions. Moreover, the linearity of u_h ∈ Ĥ_h on each element K ∈ T_h implies a relation stating that, also for mixed boundary conditions, the lowest-order FE and FVE schemes only differ by the assembly of the right-hand side, which is not necessarily true for all nonsymmetric formulations.
Stability and convergence analysis
Back to the homogeneous Dirichlet case, our next goal is to prove a FVE counterpart of Lemma 3.2, leading to the solvability and stability of (4.2)–(4.3). Recall that Lemma 3.1 establishes that the bilinear form a(·,·) is (Z_h × Q_h)-elliptic, uniformly with respect to η. Lemmas 4.1 and 3.2 readily imply that B(·,·) satisfies an inf-sup condition, as stated in the following result.
Analogously to the previous section, the following two theorems formulate the unique solvability, stability, best approximation, and convergence properties of the discrete problem (4.2)–(4.3). In particular, there exists a constant C > 0, independent of h and η, such that the discrete solution is bounded by the data, and the following best-approximation result is satisfied:
∥(ω − ω_h, p − p_h)∥_{Z×Q} + ∥u − u_h∥_H ≤ C inf { ∥(ω − θ_h, p − q_h)∥_{Z×Q} + ∥u − v_h∥_H : (θ_h, q_h) ∈ Z_h × Q_h, v_h ∈ H_h },
where ((ω, p), u) ∈ (Z × Q) × H is the unique solution to (2.8)–(2.9).
The next lemma establishes linear convergence of the lowest-order FVE method. Its proof rests on (4.8)–(4.9): after applying the inf-sup condition from Lemma 4.2 to Eq. (4.8), standard arguments imply that there exists a constant C₀ > 0, independent of h and η, controlling the corresponding term; moreover, combining (4.8) with (4.9), and using (4.1) together with Lemma 3.1, implies that there exists a constant C₁ > 0, independent of h and η, bounding the remaining contributions. Applying the triangle inequality to the convergence bound for the FE method established in Theorem 3.2, in combination with the inequalities (4.10) and (4.11), finishes the proof. □ To close this section, we prove an L²-estimate for the displacement error. For this purpose we first state a preliminary result (cf. [32]) involving the transfer operator H_h(·).
Lemma 4.3. For any function z_h ∈ H_h and any element K ∈ T_h, one has ∫_K (z_h − H_h z_h) = 0.
such that employing identity (3.7) and identity (4.12) yields a representation valid for all z_h ∈ H_h. In particular, we take the Lagrange interpolant of z, denoted by z_I ∈ H_h. Moreover, we use f_K to denote the average of f on a given K ∈ T_h. Then, by virtue of Lemma 4.3, and after integrating over the elements K ∈ T_h instead of over the control volumes K⋆_j ∈ T⋆_h, we find that the corresponding term is bounded with some constant C₀ > 0, independent of h and η. Applying the triangle inequality, using the estimates for the Lagrange interpolants, and exploiting the additional regularity, we get (4.14), and we arrive at the desired result after taking the L²-projections for ξ and φ, using interpolation properties, and employing (4.13) in combination with (4.14). □
Numerical tests
We report in this section some numerical examples which confirm our theoretical results, also including some additional cases not covered by our analysis.
Test 1A (accuracy assessment in 2D). For our first computational example we conduct a convergence test using a sequence of successively refined uniform partitions of the elastic domain Ω = (0, 1)². We arbitrarily choose the Lamé parameters µ = 50 and λ = 5000, so that η = 0.0099. This example focuses on the pure-Dirichlet problem (2.4)–(2.7), where we propose closed-form smooth solutions satisfying the homogeneous Dirichlet datum, and where the forcing term f is constructed from these functions and the linear momentum equation. The convergence study is performed for the FVE method (4.2)–(4.3) (of lowest order), and for the Galerkin schemes (3.1)–(3.2) of order k = 1 and k = 2. For a generic scalar or vectorial field v, on each nested mesh we denote the computed errors by e(v) and the experimental convergence rates by r(v) := log(e(v)/ê(v))/log(h/ĥ), where e, ê stand for the errors generated on meshes with meshsizes h, ĥ, respectively; and we recall that ∥·∥_H denotes the η-dependent norm. These errors are tabulated by number of degrees of freedom in Table 5.1, which corresponds here (and in all subsequent tests) to the dimension of the space Z_h × Q_h × H_h. Apart from the displacement error measured in the L²-norm (which decays with order h^{k+1}, as anticipated by Theorems 3.3 and 4.3), each individual error exhibits an O(h^k) rate of convergence, as predicted by the a priori error estimates stated in Theorems 3.2 and 4.2. Moreover, the errors produced by the first two methods practically coincide, which is explained by the fact that they only differ in the RHS assembly. For reference, in the top row of Fig. 5.1 we depict approximate solutions generated with the lowest-order FVE scheme.
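The experimental rate r(v) between consecutive meshes is straightforward to compute; the small helper below (written for this sketch) tabulates the rates along a whole refinement sequence.

```python
import math

def convergence_rates(errors, meshsizes):
    """Experimental orders of convergence r = log(e/ê) / log(h/ĥ)
    between consecutive entries of a refinement sequence."""
    rates = [float("nan")]                     # no rate for the coarsest mesh
    for (e_prev, e_cur), (h_prev, h_cur) in zip(
            zip(errors, errors[1:]), zip(meshsizes, meshsizes[1:])):
        rates.append(math.log(e_prev / e_cur) / math.log(h_prev / h_cur))
    return rates

# Example: halving h while the error drops by ~4x indicates order 2.
print(convergence_rates([1.0e-2, 2.5e-3, 6.3e-4], [0.1, 0.05, 0.025]))
```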
Test 1B (accuracy in a 3D non-convex domain). We now consider again the pure-Dirichlet case, this time with a non-homogeneous datum set from the following closed-form displacement defined on an L-shaped domain of width 2, height 1.5, and depth 0.5:
u = ( sin(πx) sin(πy) cos(πz), −x cos(πx) cos(πy) sin(πz), xy sin(πx) cos(πy) cos(πz) )ᵗ,
which is also used to compute the exact rotation, pressure, and body load. The model constants are taken as in Test 1A, and we generate a sequence of refined meshes (non-nested and unstructured), producing the convergence study reported in Table 5.2.

Test 1C (robustness with respect to η). In addition, these methods are robust with respect to the model parameters, which we confirm by a series of tests where we fix a Young modulus E = 10000, vary the Poisson ratio ν, and measure the errors produced by the first-order finite element method on an unstructured mesh of 33282 elements (Table 5.3). Furthermore, we also construct a different smooth forcing term f = 100(cos(x), cos(y))ᵗ, independent of the model parameters, solve the discrete problem for relatively large Lamé constants (we recall that λ = Eν/[(1 + ν)(1 − 2ν)] and µ = E/(2 + 2ν)), and tabulate in the bottom block of Table 5.3 the obtained norms of the approximate solutions. We evidence stable and robust computations even in the nearly incompressible limit. We have also arbitrarily set the scaling parameter to a very low value, η = 1e−15 (even if materials with such large differences between the shear and dilation moduli are rarely encountered), and have reproduced Test 1A (experimental convergence against a manufactured solution), observing that the methods still produce optimal convergence rates. We stress that these rates are optimal when measured in the H-norm.

Figure 5.1. Approximate solutions computed with the lowest-order FVE scheme for Test 1A (a, b, c); approximate solutions computed with a FE scheme of order k = 1 for Test 1B (d, e, f).

The computational cost associated with solving the linear systems arising from the FE and FVE discretisations can be significantly reduced through static condensation of the pressure and rotation blocks. The relevant systems assume the saddle-point form
A σ_h + Bᵗ u_h = 0, B σ_h = F, (5.1)
with σ_h := (p_h, ω_h)ᵗ. Since A is symmetric and positive definite, σ_h can be eliminated from the first equation of (5.1) using σ_h = −A⁻¹Bᵗu_h (recall that A is formed by the pressure and rotation mass matrices, so it is block-diagonal and easily inverted). Substituting this back into the second equation of (5.1) yields the displacement Schur complement system
B A⁻¹ Bᵗ u_h = −F, (5.2)
which is smaller, symmetric, and positive definite. Different methods can be employed to solve the Schur complement problem efficiently, also avoiding the explicit assembly of S := BA⁻¹Bᵗ (see e.g. [47] for an application in elasticity).
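Since A is block-diagonal (mass matrices), the action of S = BA⁻¹Bᵗ can be applied matrix-free inside a Krylov solver rather than assembling S explicitly. The SciPy sketch below assumes A and B are already-assembled sparse matrices and F is the load vector; these names are placeholders for illustration.

```python
import scipy.sparse.linalg as spla

def solve_schur(A, B, F):
    """Solve (B A^{-1} B^T) u = -F without forming the Schur complement."""
    solve_A = spla.factorized(A.tocsc())        # cheap: A is block-diagonal

    def apply_S(u):
        return B @ solve_A(B.T @ u)             # u -> B A^{-1} B^T u

    n = B.shape[0]
    S = spla.LinearOperator((n, n), matvec=apply_S)
    u, info = spla.cg(S, -F)                    # S is symmetric positive definite
    assert info == 0, "CG did not converge"
    sigma = -solve_A(B.T @ u)                   # recover (p_h, omega_h)
    return u, sigma
```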
Test 2 (2D beam bending). For the next computational example we study the displacement–rotation–pressure patterns of a rectangular beam (with length L = 10 and height l = 2) subjected to a couple (that is, a prescribed traction (f(1 − y), 0)ᵗ, with f = 200) at one end, as shown in Fig. 5.2(a). We assume that the origin O is fully fixed and that the horizontal displacement is zero along the left edge of the domain Ω. Furthermore, on the remainder of the boundary we consider zero normal stresses incorporated through the bilinear form c(·,·) (see (3.12)), and we set a zero body force f = 0. The availability of an exact solution (cf. [48]) makes this problem a frequently used benchmark. In Fig. 5.2 we illustrate the components of the displacement, the rotation, and the pressure computed on a mesh of 5120 triangular elements using the mixed FE method with k = 2, where the rectangular beam has the following material properties: Young's modulus E = 1500, Poisson's ratio ν = 0.49, and Lamé constants λ = 24664.4 and µ = 503.356, so that the model parameter equals η = 0.02. In addition, we conduct several tests for the lowest-order mixed FE and FVE methods on different mesh resolutions and report the error with respect to the analytic solution (5.3). Although this cannot be expected in general, we mention that the second-order FE scheme exhibits extremely rapid convergence here (explained by the regularity of the true solution (5.3)). For ν = 0.4999, optimal convergence is recovered on finer meshes.
We also perform a series of tests for the lowest-order FE method using different Lamé constants and model parameters in order to assess the performance of the methods when approaching the incompressibility limit; we fix a Young's modulus E = 1500, vary the Poisson ratio ν, and use a mesh consisting of 100000 triangular elements with 301201 D.o.f. Based on the comparisons in Table 5.4, we observe that the performance is barely affected for large values of λ.

Test 3 (3D beam bending). We also consider a three-dimensional beam problem. The beam occupies the domain Ω = (0, ℓ) × (0, w) × (0, w), with ℓ = 2.5 and w = 0.5 (see the sketch in Fig. 5.4(a)); its elastic properties are characterised by a Young modulus E = 1000 and a Poisson ratio ν = 0.3, giving Lamé constants λ = 576.923 and µ = 384.615, and the coefficient η = 0.4. The body force acts in the direction of gravity, f̂ = (0, 0, −ρg)ᵗ, with g = 9.8 and ρ = 0.2. Zero displacements are enforced on the face x = 0, whereas on the remainder of the boundary we consider zero normal stresses incorporated through the term ∫_{x>0} 2η(∇u − div u I)n · v defining the bilinear form c(·,·) (see (3.12)). In Fig. 5.4 we illustrate (on the deformed configuration) the displacement, rotation vector, and pressure computed on a mesh of 45221 tetrahedral elements, employing a method of order k = 2. In the case of gravity-induced deflection, the Euler–Bernoulli beam theory predicts a maximum vertical deflection of δ = ρgAℓ⁴/(8EI), occurring at the free end of the body, where A = w² is the area of the cross-section and I = w⁴/12 is the planar moment of inertia. Table 5.5 compares the expected deflection with the vertical displacement measured at the midpoint of the face located at x = ℓ, for different discretisation choices. We also tabulate the norms of the approximate solutions generated with the lowest-order FE method on successively refined meshes (see Table 5.6).
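With the stated data, the Euler–Bernoulli reference value can be evaluated explicitly (our own arithmetic, using the formula above; the exponent in the deflection formula is reconstructed from standard cantilever theory):

$$ q = \rho g A = 0.2 \cdot 9.8 \cdot 0.25 = 0.49, \qquad I = \frac{0.5^{4}}{12} \approx 5.208\cdot 10^{-3}, $$
$$ \delta = \frac{q\,\ell^{4}}{8 E I} = \frac{0.49 \cdot 2.5^{4}}{8 \cdot 1000 \cdot 5.208\cdot 10^{-3}} \approx 0.46. $$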
Test 4 (Cook's membrane benchmark). We finalise the set of tests by considering a two-dimensional quadrilateral panel with domain Ω defined as the convex hull of the set {(0, 0), (ℓ, w), (ℓ, ℓ + s), (0, w)}, with ℓ = 48, w = 44, s = 16, and proceed to study its elastic response, dominated by bending and shear. This benchmark is known as Cook's membrane problem (cf. [49]). The panel is clamped at the left edge (x = 0) and the body is subjected to a distributed shearing load t̂ = (0, 1/s)ᵗ on the opposite end (at x = ℓ, giving a resulting load of magnitude 1; Fig. 5.5(a)). This effect is incorporated in the formulation through the term −∫_{x=ℓ} t · v ds added to the functional F(·) in the modified weak formulation (3.10)–(3.11). A traction-free condition is applied on the non-vertical boundaries (imposed as in the previous test, using (3.12)), and we set a zero volume force f = 0 (so that the weight of the membrane is not considered). The elastic plate has Young's modulus E = 1, Poisson ratio ν = 1/3, and Lamé constants λ = 3/4, µ = 3/8, giving a scaling constant of η = 1/3. Fig. 5.5 portrays the displacement, rotation, and pressure fields on the deformed domain (without amplification of the deformation field). We also conduct several tests for different mesh resolutions and report the vertical displacement (deflection) measured at the midpoint of the right end of the domain, (x₀, y₀) = (ℓ, ℓ + s/2). The results are shown in panel (b) of the figure, where the convergence of the deflection is observed as a function of the number of points discretising the right edge of the membrane. In the absence of a known closed-form solution for this problem, we also include a reference value reported in the literature (according to [50][51][52], under plane stress conditions the maximum vertical displacement at this point should be around 23.92). To conclude, we perform the Cook's membrane test again, now focusing on the nearly incompressible limit. We choose the model parameters E = 250, ν = 0.4999, λ = 416611, µ = 83.3389, and η = 0.0002. As reference value for the maximum deflection at the point (x₀, y₀) we take 7.505 (see [4,53]), and conduct the convergence analysis portrayed in Fig. 5.5(c) (see also Table 5.7, where we display all individual norms of the numerical approximations obtained via FE schemes of different orders). This time the vertical displacement is plotted against the D.o.f. of the underlying discretisation, where we also include a comparison against numerical results obtained with other finite element formulations applied to the original equations (2.1): a classical pure-displacement formulation discretised with piecewise continuous elements of degree k, the Taylor–Hood finite element for a displacement–pressure formulation, the MINI element [54], and a stabilised interior-penalty DG method [55].
These schemes have comparable complexity (we do not include other mixed methods based on stress or pseudostress formulations, as their associated cost would be much higher). A further comparison between these methods focuses on their computational cost, measured in terms of CPU time. To do so, we consider the simple test case of a square domain with clamped boundaries, where the structured primal mesh has 10000 vertices. We set E = 10000, ν = 0.33 and solve the elastostatics problem using different methods, whose performance is shown in Table 5.8. The tabulated results display measured wall CPU time comprising matrix assembly, factorisation, and solution. As direct solvers may not be preferable for large systems and 3D problems, we also include the wall time for the system solve using a Krylov method, the bi-conjugate gradient stabilised method (BiCGStab), preconditioned with an incomplete Cholesky factorisation. The results indicate that the proposed mixed FE and FVE methods are preferable. As mentioned above, the matrix system associated with the Schur complement formulation (5.2) is substantially smaller than in the other methods. Unfortunately, a drawback of this formulation over the standard implementation of our mixed FE and FVE schemes (3.1)–(3.2) and (4.2)–(4.3) is that the assembly of the blocks and the computation of the action of the inverses (which we do not carry out in an optimal manner) consume most of the CPU time.
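The Krylov alternative mentioned above can be sketched as follows; this is a minimal stand-in rather than the paper's implementation (SciPy exposes an incomplete LU rather than an incomplete Cholesky factorisation, and a 1D Laplacian replaces the elasticity matrix):

    import time
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 10000                                    # matches the mesh vertex count above
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    t0 = time.perf_counter()
    ilu = spla.spilu(A)                          # incomplete factorisation
    M = spla.LinearOperator(A.shape, ilu.solve)  # wrap it as a preconditioner
    x, info = spla.bicgstab(A, b, M=M)           # preconditioned BiCGStab solve
    print("converged" if info == 0 else "not converged",
          time.perf_counter() - t0, "s")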
Test 5 (mixed boundary conditions in a 2D non-convex domain). For our last test, we investigate numerically the accuracy of the formulations proposed in Sections 3.2 and 4.2. Apart from setting displacement–traction boundary conditions, we again define the problem on a non-convex domain (the unit square from Test 1A now has a circular hole). Convergence results are collected in Table 5.9 for the lowest-order methods, where we have set the bottom, right, top and left walls of the domain as the displacement boundary, and the non-homogeneous traction condition is imposed on the inner circle. Coarse-mesh solutions for displacements are exemplified in Fig. 5.6. The FE scheme exhibits a similar behaviour to the one observed in Test 1B: suboptimal displacement convergence in the L²-norm, again due to the non-convexity of the domain, while the remaining errors behave as in the pure Dirichlet case. While we only confirm this behaviour numerically, these computations stand as a motivation to investigate further the theoretical properties of the formulations in the case of mixed boundary conditions. | 2019-04-22T13:12:46.569Z | 2019-02-01T00:00:00.000 | {
"year": 2019,
"sha1": "9c047d114c7700e3673f398eed0d683e2cebf023",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.cma.2018.09.029",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "069cb39c0ff6235e69823e0337d4270043664f3f",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
3767973 | pes2o/s2orc | v3-fos-license | A Framework for Cloud-Based Healthcare Services to Monitor Noncommunicable Diseases Patient
Monitoring patients who have noncommunicable diseases is a big challenge. These illnesses require continuous monitoring, which leads to high healthcare costs for patients. Several solutions have been proposed to reduce the economic impact of these diseases while preserving quality of service. One of the best solutions is mobile healthcare, where patients do not need to be hospitalized under the supervision of caregivers. This paper presents a new hybrid framework based on a mobile multimedia cloud that is scalable and efficient and provides a cost-effective monitoring solution for patients with noncommunicable diseases. In order to validate the effectiveness of the framework, we also propose a novel evaluation model based on the Analytical Hierarchy Process (AHP), which incorporates criteria from multiple decision makers in the context of healthcare monitoring applications. Using the proposed evaluation model, we analyzed three possible frameworks (the proposed hybrid framework, a mobile framework, and a multimedia framework) in terms of their applicability in a real healthcare environment.
Introduction
Noncommunicable disease is one of the most severe causes of the dramatic increase in the number of deaths around the world [1,2]. Noncommunicable diseases, also called chronic diseases, such as heart disease, stroke, cancer, chronic respiratory diseases, and diabetes, are a concern not only for developing countries but even for first-world countries, especially in economic terms, where they cost huge amounts of money in patients' healthcare.
In the EU, deadly diseases (such as CVD) account for over two million deaths each year [2]. Moreover, in Southeast Asia, the expenses of Singapore's healthcare providers increased in one year, 2006, by 8.3% compared to 2005. According to a report by the WHO and WEF [1], China alone will suffer a loss in productivity amounting to a staggering US$558 billion by 2015 due to the impact of these diseases [2]. See Table 1 for other countries.
Many solutions have been put in place to reduce the impact of these diseases. For example, patients who have heart disease are cared for through programs that are mostly run in hospitals and health centers under the supervision of medical staff [3][4][5][6][7]. Due to limited time or long distances from treatment centers, patient participation in these programs is poor. With the purpose of addressing the aforementioned concern, there is an increasing need for continuous monitoring of patients' health. Mobile healthcare is one of the most important solutions that enables constant monitoring and helps reduce expenditures [8].
Mobile healthcare (m-health) provides a bilateral solution: the empowerment of individual monitoring of chronic care and cost-effective healthcare services at all economic levels, as the proceedings of the m-health summit at the WEF confirmed [9]. M-health systems can be used for diverse, unobtrusive monitoring types; chronic disease monitoring is one of them [5]. Moreover, m-health technologies offer real-time monitoring and detection of changes in health status, support the adoption and maintenance of a healthy lifestyle, provide rapid diagnosis of health conditions, and facilitate the implementation of interventions ranging from promoting patient self-care to providing remote healthcare services [10].
A big challenge is how to enable constant monitoring while the patient is out of the hospital/clinic and carrying out daily living activities in his or her own environment. In addition, how can we provide low-cost, small devices for continuous, 24-hours-a-day, any-place monitoring of health, mental, and activity status? We can summarize our contribution as follows.
(i) We proposed a solution that combines the benefits of cloud computing, mobile health applications, wireless body sensors, and media healthcare services in one hybrid framework. Users can access their services and applications anywhere, from any device, at any time. Our framework can facilitate effective processing of complicated multimedia services and applications from anywhere, at any time, and on any device. Multimedia services and applications enable doctors and other healthcare professionals to have fast access to m-health information for effective decision making and better care. Patients' multimedia information (such as video, images, and text) is monitored and sent through mobile devices (such as smartphones and wearable sensors) to the cloud, where it is processed and prepared for use by another party (e.g., doctors, therapists, caregivers, or other patients) to make a decision or to share this information. (ii) We proposed a novel evaluation model based on the Analytical Hierarchy Process called Cost-effective, Health support, Operational and Functionality (CHOF). This model helps evaluate any system/framework/tool that monitors patients with a chronic illness. It consists of four main criteria and fifteen subcriteria. (iii) We analyzed three possible frameworks (the proposed hybrid framework, a mobile framework, and a multimedia framework) in terms of their applicability in a real healthcare environment.
The remainder of this paper is organized as follows. Section 2 describes related work; Section 3 presents our proposed model in detail; Section 4 shows the evaluation process, with analysis and results; and finally, concluding remarks are made in Section 5.
Literature Review
There has been a lot of research on mobile healthcare and remote monitoring in the past decade. It has been emphasized that constant patient monitoring has led to a 50% drop in hospitalizations, a 73% reduction in emergency room visits, and a 51% reduction in patient cost. It also provides decreased costs, higher revenues, and the ability for caregivers to take on higher caseloads [11].
Since our framework consists of three parts (multimedia services in healthcare, cloud computing for healthcare, and mobile healthcare), this section studies the literature from these three points of view.
Multimedia Services in Healthcare.
Multimedia is a complex collection of elements changing over time, such as motion graphics, animated type, 3D-generated elements, video, and sound, combined together and distributed through a sort of interactive mechanism. Nowadays, multimedia services in m-health are getting a great deal of attention, since medicine is a very visual field; the structure and function of the human body, how diseases are caused, how the body reacts to disease, how a particular drug works, and how a procedure should be done can best be explained through media. Media files can be streamed seamlessly through the internet, intranets, and related technologies [12].
Much research proves that multimedia plays an important role in e-health services, especially in educational software. Multimedia improves the way educational materials are designed and implemented, making it easy for learners to understand information in an interactive environment. It is a far more effective educational method to use an image or an animation along with the textual description of a biomedical image or of a clinical action [13]. The authors in [14] eliminated the need for the 20-minute trips previously required to manually transport radiology images, test results, and other medical information. Besides, wireless web cameras installed at the remote site allowed medical staff in the field to run real-time video consultations and patient reviews with their colleagues in the hospital. They demonstrated the transmission of medical data over a standard network, with no effort to tailor the characteristics of the transmission system to the specificity of the transmitted data. In [15] the authors exploit personalized healthcare information for the elderly to learn and improve their healthcare knowledge. The provision of multilingual video clips of stroke-precaution knowledge could help the elderly overcome reading comprehension as well as illiteracy problems.
Cloud Computing in Healthcare.
The term cloud computing refers to computing resources that are available on demand, through which computing infrastructure, applications, and business processes can be delivered to users as a service wherever and whenever they need them. With the advent of cloud computing, the long-held dream of computing as a utility has come true [16,17]. We can compare cloud computing to the supply of electricity and gas: customers are only charged based on the usage of the provided services and resources. Everything is rolled up into a predictable monthly subscription; thus, one only pays for what one uses [17,18]. With respect to the m-health domain, many previous studies have identified the future of cloud computing and offered various frameworks to enhance healthcare services [18][19][20][21].
In [22] the authors proposed a cloud-based system to computerize the approach of gathering patients' crucial information through a system of sensors linked with legacy medical devices and to convey the information to the medical center's cloud for storage, manipulation, and delivery. The major advantages of the system are that it gives clients continuous, 7-days-a-week information gathering, eliminates manual work, and is free of errors.
In [23] the authors described a cloud computing protocol management system that provides multimedia sensor signal processing and security as a service to mobile devices. The system relieves mobile devices from executing heavier multimedia and security algorithms when delivering mobile health services. This improves the utilization of ubiquitous mobile devices for societal services and promotes health service delivery to marginalized rural communities.
The authors in [24] presented a pervasive cloud initiative called Dhatri, which leveraged the power of cloud computing and wireless technologies to enable physicians to access patient health information at any time from anywhere. In [25] the authors described a cloud-based prototype emergency medical system that can be accessed by Android-enabled mobile devices. They integrated the emergency system with personal health record systems to provide physicians with easy and immediate access to patient data from anywhere and via almost any computing device while containing costs.
Mobile Healthcare.
In the recent two decades, there has been a relentless decrease in the number of patients treated in medical facilities because of the impact of mobile healthcare. Long-term monitoring of patients' physical, cognitive, and behavioral processes is vitally important for those with chronic diseases. The WHO defined m-health as "medical and public health practice supported by mobile devices, such as mobile phones, patient monitoring devices, personal digital assistants, and other wireless devices" [26]. An overview of a health monitoring architecture with a smartphone was introduced in [27], in which the phone links to external wireless sensor devices, such as a blood pressure monitor and a weight scale, to collect health data periodically. These external devices have sensors that are Bluetooth-enabled. The smartphones running healthcare apps monitor the wellbeing of the patient and transmit the data to the healthcare data server maintained by the hospital via the internet [3].
A mobile device embedded with sensors uses different information about the monitored person, such as position and temperature, to perform a precise analysis in a cloud [5]. The medical personnel access the data server via a secure internet connection to monitor the patient's health remotely. Some existing systems are mostly stand-alone and are not yet integrated with existing electronic health systems, which could critically limit their large-scale deployment [1,2,9].
The authors in [28] designed and developed a new Home Monitoring System with full functionality, a small device, low power consumption, low cost, and ease of use, based on distributed database storage. Only a few systems can deal with multiple devices. All the examined systems connect to a remote server to store the patient data; none supported distributed data storage. Many m-healthcare systems have limitations (closed systems; no clinical data integration services for data that may come from different sources when more than one professional is involved).
In [29] the authors proposed a new m-healthcare system based on SOA, called SOAMOH, that enables the integration of clinical data, supports HL7, and helps people attain healthcare services whenever and wherever they are, using their mobile devices connected to wireless networks. However, they did not implement or test this system.
In [4] the authors built a system to measure and record the heart rate of patients using software tools on a mobile phone platform. This allows patients to follow a home-based cardiac rehabilitation exercise program using a mobile application called TuneWalk. Two of the authors participated as test subjects in the evaluation. TuneWalk recorded the heart rate variability and activity data that were measured by the WBA. TuneWalk's MET estimation with the test subjects was found to be fairly accurate at walking speeds. However, when the subjects ran, the results became unreliable. This should not present a major problem, since in the proposed home-based CR program exercises, subjects are required to walk and not run. The authors in [6] introduced a mobile-healthcare-based heart failure monitoring architecture. The framework consists of various sensors to measure physical quantities, processors to process those quantities, and smartphones acting as a hub to deliver these data to the appropriate users or caregivers.
The authors of [8] proposed a predictive and preventive device that is capable of predicting heart rate abnormality, and possibly tachycardia, in advance by using an advanced prediction model to estimate the heart rates of selected patients. This is done by sending alerts to the medical professionals for appropriate action to be taken when the estimated heart rates exceed a certain threshold.
In [9] the authors proposed an architecture to integrate the data of personal health monitoring systems within an electronic healthcare network by extending the traditional Service Oriented Architecture (SOA) approach to business-to-business networks with support for complex event processing and context awareness.
In [30] the developers proposed wearable blood pressure sensors that monitor patients' health and allow effective treatment of hypertension at home or in out-of-hospital environments.
The author in [10] introduces a novel architecture called the HERA project, which aims to build an AAL system providing low-cost services to improve the quality of life of the elderly who suffer from the early stages of Alzheimer's disease and/or other diseases (e.g., diabetes or cardiovascular problems). The HERA system developers have a complete methodology and architecture and have planned the evaluation process. However, they are yet to implement their plans; therefore, there are currently no results to show whether the system actually works. Some of the research has discussed this issue from the hardware point of view.
The work in [28] presents a handoff protocol that can be readily implemented by Wireless Body Area Sensor Network (WBASN) coordinators and APs when the RSS of the former falls below acceptable levels. For this, they promote employing multiple radio channels in order to leverage the system's capacity, which allows monitoring multiple users in a deployment setting with several rooms. They tried to build a reliable and efficient health monitoring application based on WBASNs, using their protocol to enable continuous monitoring of ambulatory patients at home. The processing in [28] was conducted on offline data collected via Bluetooth from a small wearable electrocardiograph (ECG) device, since processing online, real-time data is difficult [21,31]. This issue was addressed in [31] using a software framework for body sensor networks (BSN) called SPINE (Signal Processing In-Node Environment). This software enables emulation of a set of nodes forming a WBSN and requires a data set for each node.
Different healthcare systems have been introduced in the literature [3-20, 22-27, 30, 32-37]. Despite considerable progress in health monitoring research over the last decade, today's health-monitoring systems are not fully capable of monitoring noncommunicable disease patients while they carry out their daily living activities. In most of the systems, a patient or user is restricted to his/her room or home, or to an area within the range of the installed wireless body sensor network [11-27, 31, 33-37]. If the user goes outside this range, the monitoring device fails to relay recorded information to the gateway device and, as a result, some information is lost. In addition, many older systems use PCs or other hardware as the gateway platform, which again limits mobility. Moreover, an important service less discussed in most of the related studies is the patient's ability to view his/her medical data trends anywhere and anytime with minimal additional hardware requirements.
Proposed Work
The proposed work presented in this paper has two dimensions: first, unifying the aforementioned concepts in one framework as a solution for real-time monitoring of noncommunicable disease patients; second, an evaluation model that helps appraise the proposed framework and various alternative solutions and frameworks.
3.1. Cloud-Based Multimedia Framework. The proposed framework is shown in Figure 1; it depicts the system architecture of the cloud-based multimedia healthcare framework. The media cloud computing physically separates the user interface from the media application logic. The user device (e.g., smartphone, laptop, IP camera/webcam connected to a PC, etc.) executes only a viewer component (e.g., a web browser or mobile application) operating as a remote display for the m-health media services and applications running on distant servers in the cloud.
The multimedia cloud providers deploy powerful cloud virtual machine resources such as CPU, memory, GPU, and network bandwidth on demand, while utilities first process and manage heterogeneous m-health multimedia requests and then deliver computing results or m-health media data to the users. By employing a multimedia cloud service, mobile users do not need to pay for costly computing devices. Instead, they pay for the utilized resources based on time.
As shown in Figure 1, the users (e.g., patients, doctors, caregivers, etc.) can obtain different m-health media services from the cloud media server. We do not need a traditional media streaming server for progressive download or for HTTP-based adaptive streaming technologies such as Apple HTTP Live Streaming (HLS) or Microsoft Smooth Streaming. We do, however, need a media server to stream live or on-demand media. Streaming servers are necessary if the client wants to protect their streams with encryption, deliver data via peer-to-peer or multicast, or serve multiple targets.
There are three basic options: the Adobe Flash Media Server line of products, Microsoft's IIS Media Services, and Wowza Media Server. There is also an open-source streaming server called Red5 that uses secure protocols for data streaming to Flash, but it does not currently convert streams for delivery to iOS or support any adaptive streaming technology. The cloud media server introduces many media services, such as media storage, transcoding, analysis and sharing, and streaming services, and requests for their different compositions are made through a web browser or mobile application interface. The users' composite service requirements are then sent to the cloud system manager, which finds a suitable configuration of VM resources based on the SLA.
The resource allocation manager then allocates the VM resources to a set of physical machines to run the mobile media service tasks. The mobile media service task outputs (i.e., display updates, composition results, etc.) are finally transmitted to the user through the web browser or mobile application. After the media applications or services are started, the system monitoring and metering function tracks the VM resource usage attributed to each user. It can also notify the resource and system managers for a quick response and resource reconfiguration to ensure that the correct VM resources are distributed to the appropriate mobile users. Therefore, in order to correctly allocate resources and deploy VM images, an efficient, cost-effective, and optimal VM resource allocation algorithm is necessary for the resource manager.
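For illustration only, a first-fit heuristic of the kind such a resource manager might start from is sketched below in Python; the demands, capacities and names are hypothetical, and a production scheduler would optimise cost under the SLA:

    # Toy first-fit VM placement (all data hypothetical).
    def first_fit(vm_demands, host_capacities):
        """Assign each VM's (cpu, mem) demand to the first host that still fits it."""
        placement = {}
        free = list(host_capacities)             # remaining (cpu, mem) per host
        for i, (cpu, mem) in enumerate(vm_demands):
            for h, (fc, fm) in enumerate(free):
                if cpu <= fc and mem <= fm:
                    free[h] = (fc - cpu, fm - mem)
                    placement[i] = h
                    break
        return placement

    print(first_fit([(2, 4), (1, 2), (4, 8)], [(4, 8), (4, 8)]))  # {0: 0, 1: 0, 2: 1}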
Evaluation Model.
The proposed framework in the previous section allows three different possible implementations: mobile, multimedia, and hybrid solutions.
(1) Mobile systems are those that use smartphone healthcare applications (apps) to monitor health, either as standalone applications or as applications connecting to wearable sensors and working as a gateway for a body sensor network (BSN) [30]. In this solution, data may be limited to text and a set of biosensor signals.
(2) Multimedia systems in m-health monitoring are those that provide services containing elements changing over time, such as motion graphics, animated type, 3D-generated elements, video, and sound. Many tools, such as IP and web cameras, and applications, such as Voice over IP (VoIP) and streaming audio and video, are used in multimedia health services over wireless architectures [14]. (3) Hybrid systems are those that combine the benefits of the two other systems, mobile and multimedia, in a single system.
Here we study the three solutions to find the best one among them. We compare the three systems based on different criteria using the Analytical Hierarchy Process (AHP) [38]. The first step is the preparation of an evaluation task that considers user needs, assumptions, and constraints relevant to the three solutions. The second step is to identify and select the evaluation criteria with respect to these solutions. In this respect, we build four main criteria and fifteen subcriteria, as illustrated in Figure 2. The description of each criterion is given in Table 4.
We used multicriteria decision making (MCDM) techniques. A number of MCDM techniques can be applied to our selection problem; here we used AHP. Moreover, a number of additional features working in favor of selecting AHP for the selection process have also been considered. AHP is an appropriate technique to use when a limited number of alternatives needs to be evaluated.
Analytical Hierarchy Process (AHP). The Analytical Hierarchy Process (AHP) is an MCDM technique, developed by
Saaty [38]. AHP decomposes a complex MCDM problem into a hierarchy. It generally consists of selecting a goal, listing criteria and subcriteria, determining the alternatives, building the hierarchy, assigning priorities, calculating weights, checking consistency, obtaining results, and making the final decision. The AHP technique has been implemented according to the following steps.
(1) Modeling the Problem. AHP decomposes a complex MCDM problem into a hierarchy model, as shown in Figure 3, with the goal (evaluating and selecting a suitable solution for the cloud-based framework) at the top, the alternatives (mobile, multimedia, hybrid) at the bottom, and the criteria (Cost-effective, Health support, Operational and Functionality) and subcriteria (cost on the provider side, cost on the consumer side, information type, real time, coverage, etc.) in the middle.
(2) Applying Pair-Wise Comparison between the Children of Each Level of the Hierarchy. This comparison generates a matrix of relative rankings for each level, also called the "judgment matrix." If attribute i has one of the Saaty scale numbers assigned to it when compared with attribute j, then j has the reciprocal value assigned to it when compared with i; more formally, if a_ij = x, then a_ji = 1/x, so the matrix satisfies the relation a_ji = 1/a_ij. The order of the matrix depends on the number of elements at its connected lower level. The pair-wise comparison is conducted based on the Saaty scale described in Table 2 [38].
(3) Computing the Eigenvector. Once the pair-wise comparison is performed, the eigenvectors are computed. The eigenvector is computed by dividing each element of the matrix by the sum of its column elements and then averaging across each row. It may be mentioned that the eigenvectors represent the relative weights among the alternatives (mobile, multimedia, and hybrid).
(4) Computing the Consistency Index (CI). The consistency index of a matrix of order n (the size of the matrix) is computed as CI = (λ_max − n)/(n − 1), where λ_max is the largest eigenvalue of the matrix of order n. (5) Computing the Consistency Ratio (CR). The consistency ratio is computed as CR = CI/RI; it compares the consistency index with the random consistency index (RI). As shown in Table 3, RI is generated from a sample size of 500 matrices. If the value of the consistency ratio is smaller than or equal to 10%, the inconsistency is acceptable. For n = 3, the threshold is set to 0.05 and, for n = 4, it is set to 0.08. For n ≥ 5, if the consistency ratio (CR) is greater than 10%, the judgment needs to be revised [38].
(6) Computing the Final Ranking. The final ranking is calculated as the weighted sum of the alternative scores over the criteria, P_j = Σ_i w_i · s_ij for each alternative j = 1, …, n, where m is the number of criteria and n is the number of alternatives. The alternative with the highest priority value is considered the most suitable solution for the selection problem, while the alternative with the lowest priority value is the least appropriate for the given decision problem.
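To make steps (2)–(5) concrete, the following minimal Python sketch (not from the paper; the 3×3 judgment matrix is hypothetical) computes the priority vector, CI and CR:

    import numpy as np

    RI = {3: 0.58, 4: 0.90, 5: 1.12}                 # Saaty's random indices

    def ahp_priorities(J):
        # J is a reciprocal pairwise judgment matrix, J[j, i] = 1 / J[i, j]
        n = J.shape[0]
        w = (J / J.sum(axis=0)).mean(axis=1)         # column-normalise, then row-average
        lam_max = float(np.mean(J @ w / w))          # estimate of the largest eigenvalue
        CI = (lam_max - n) / (n - 1)
        return w, CI, CI / RI[n]

    J = np.array([[1.0, 2.0, 0.5],
                  [0.5, 1.0, 1.0 / 3.0],
                  [2.0, 3.0, 1.0]])                  # hypothetical judgments
    w, CI, CR = ahp_priorities(J)
    print(w, CI, CR)                                 # here CR ~ 0.008 <= 0.05, acceptable for n = 3

Running the same computation on the questionnaire data would follow the procedure behind the ExpertChoice results reported in Section 4.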
Analysis and Results
We applied the AHP technique to select one of the three solutions investigated in this research. We describe the analysis through AHP. As shown in Figure 3, the hierarchy model for the criteria for selecting cloud-based solutions consists of 4 levels. Level 0 is the goal of the problem, "selecting the best solution." Level 1 consists of the 4 main criteria: Cost-effective, Health support, Operational, and Functionality. Level 2 contains the subcriteria, and the last level contains the alternative solutions. The solutions are evaluated according to a reference model developed by Phillip Olla and Joseph Tan [36] in order to extract requirements, and then we used questionnaires to measure the relative importance of each criterion.
Figure 4 shows the ranking of the proposed cloud-based solutions for each subcriterion. As can be seen from the figure, the relative importance of each subcriterion in the solutions is shown by the dot positions. As shown in Table 5, the final ranking of the alternatives is as follows. The hybrid solution (50.4%) is the most preferable cloud-based framework for monitoring noncommunicable diseases, followed by the cloud-based mobile framework, with a priority vector of 29.5%. The cloud-based multimedia framework takes third place, with a priority vector of 20%. The table also shows the relative importance of the main criteria and the subcriteria. The health support criterion is the most important one, with a weight of 41.8%. The least important is the cost-effective factor (12%). In terms of subcriteria, we deal with global weights. In this regard, the most important criterion is real time (RT), with a global weight of 22.1%, followed by the coverage monitoring (CM) subcriterion (13.9%) and device type (DT) (9%). The last criterion is scalability, with a relative importance of 1.5%.
Figure 5 shows the ranking of the cloud-based solutions based on the main criteria. The priority vector for each tool is represented by the cyan bar. As can be noticed from the figure, the hybrid solution is the most suitable framework, since it has the highest priority vector. The figure also shows the strengths and weaknesses of each solution based on the four main criteria. For example, the strength of the hybrid solution lies in the health support factor, as hybrid got the highest rank in the health support criterion (25.2), while it got the lowest rank in the cost-effective criterion (2.7). We use ExpertChoice to calculate the final priority and check consistency. Figure 6 shows the consistency ratio (CR) for the matrix of relative rankings at the alternatives level, with respect to the fifteen criteria. As the consistency ratios for the criteria and alternative judgment matrices are less than 6 percent, the participants' responses regarding the relative importance of each criterion are consistent. As can be seen from Table 6 and Figure 6, the hybrid solution is the appropriate one to select for building a cloud-based healthcare services framework to monitor noncommunicable disease patients, as it obtains the highest ranking.
Conclusions
Over the course of the research, and from a technological point of view, we found that hybrid (mobile and multimedia) healthcare and cloud computing are a natural fit for monitoring patients with chronic diseases. The great benefit of using cloud computing to deliver media services is observable and needs to be applied. We discussed some key concepts about what has been developed in the domain of healthcare monitoring using cloud computing and media services. In this paper, our primary focus was, first, to develop an architectural framework that facilitates the process of obtaining healthcare services over a cloud platform efficiently and, second, to develop a new evaluation model to appraise and select the best solution using one of the most famous multicriteria decision making (MCDM) methods, the so-called Analytical Hierarchy Process (AHP).
We analyzed our proposed model in terms of three possible frameworks and found that the best alternative is the hybrid framework, which comprises both mobile and multimedia. The evaluation results obtained from the MCDM technique showed the most appropriate framework for building a healthcare monitoring system. It has been observed that AHP is a suitable method for solving the selection problem. Our plans for the future are as follows: (i) implementing a prototype of a cloud-centric healthcare system application; (ii) evaluating the performance of the proposed framework in terms of QoS guarantees and cost effectiveness.
Figure 1: The architecture of the proposed model.
Figure 2: Evaluation criteria regarding the three solutions.
Figure 3: Hierarchy model for selecting the best solution.
Table 1: Loss of national income of different selected countries, in billions [1].
Table 3: Random index values for matrices of different orders.
Table 4: The description of the criteria of the CHOF model.
Table 5: Final ranking of alternatives with respect to the main criteria and subcriteria.
Table 6: Overall ranking comparison (AHP). | 2018-04-03T00:56:45.888Z | 2015-03-01T00:00:00.000 | {
"year": 2015,
"sha1": "b51bdcf1404f10b8445631473461c1eec1d3907f",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1155/2015/985629",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b51bdcf1404f10b8445631473461c1eec1d3907f",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
111918597 | pes2o/s2orc | v3-fos-license | Modelling dynamic compaction of porous materials with the overstress approach
To model compaction of a porous material we need 1) an equation of state of the porous material in terms of the equation of state of its matrix, and 2) a compaction law. For the equation of state it is common to use Herrmann's suggestion, as in his P-α model. For a compaction law it is common to use a quasi-static compaction relation obtained from 1) a meso-scale model (as in Carroll and Holt's spherical shell model), or from 2) quasi-static tests. Here we are interested in dynamic compaction, as in a planar impact test. In dynamic compaction the state may change too fast for the state point to follow the quasi-static compaction curve. We therefore get an overstress situation. The state point moves out of the quasi-static compaction boundary, and only with time collapses back towards it at a certain rate. In this way the dynamic compaction event becomes rate dependent. In the paper we first write down the rate equations for dynamic compaction according to the overstress approach. We then implement these equations in a hydro-code and run some examples. We show how the overstress rate parameter can be calibrated from tests.
Introduction
To model compaction of a porous material we need: an equation of state (EOS) of the porous material in terms of the EOS of its matrix material, and a compaction law. For the EOS we use Herrmann's suggestion, as in his P-α model [1]. For a compaction law it is common to use a quasi-static law, which can be obtained from: a meso-scale model, like Carroll and Holt's spherical shell collapse model [2], or a quasi-static test, as in the P-α model [1]. We are interested here in dynamic compaction, as in a planar impact test. In dynamic situations changes may be too fast for the state point to be able to follow the quasi-static compaction surface, and it may therefore move beyond that surface (overstress). When it does, it tends to fall back towards the quasi-static surface at a rate that increases with the overstress. In this way the process of pore collapse, or pore closure, is made rate dependent. We call this the overstress approach, or the overstress concept. In what follows we: write down the rate equations for dynamic compaction, which we implement in a hydro-code; run an example of planar impact on a porous stainless steel target; and evaluate the influence of the rate dependence of the compaction process on stress histories down the target.
Equation of state
Herrmann's suggestion expresses the EOS of the porous material through the EOS of the matrix, with the specific volume replaced by V/α: P = P_m(V/α, E), (1) where E = specific internal energy, P = pressure, V = specific volume, α = distension ratio, and φ = 1 − 1/α = porosity. Herrmann's EOS is usually used for compaction of porous materials, although we've not seen it verified directly or indirectly. Herrmann's EOS is a special case of: P = P(V, E, α). (2) Differentiating we get: dP/dt = (∂P/∂V) dV/dt + (∂P/∂E) dE/dt + (∂P/∂α) dα/dt. (3) Using the adiabatic condition: dE = −(P + q) dV, (4) where q is the artificial viscosity, we get: dP/dt = [∂P/∂V − (P + q) ∂P/∂E] dV/dt + (∂P/∂α) dα/dt. (5) We see that to complete the EOS we need an α(P, V) relation or an α̇(P, V, α) relation, which is the compaction law. The first alternative is rate independent (or quasi-static), and the second is a rate dependent compaction law. For Herrmann's EOS the three partial derivatives in equation (3) follow from the matrix EOS by the chain rule.
Compaction law
Compaction laws are usually defined in the reduced forms α(P) or P_qs(α). The simplest, and quite extensively used, rate independent compaction law is: P_qs(α) = P_c. (6) This means that the material is compacted immediately in response to P > P_c. Another rate independent compaction law is derived from Carroll and Holt's quasi-static spherical shell model [2], which is: P_qs(α) = (2/3) Y_m ln[α/(α − 1)], (7) where Y_m is the flow stress of the matrix material. As this curve goes to infinity for zero porosity, we use a correction that makes it finite at zero porosity: P_qs(α) = (2/3) Y_m ln[α/(α − 1 + ε)], (8) where ε is a small number like 10⁻⁴. A general rate dependent compaction law by the overstress approach can take the form: α̇ = −F(P − P_qs(α)), (9) where F is an increasing function of its argument. We're using here the simplest form of F, with a single material parameter: α̇ = −A (P − P_qs(α)), (10) where the coefficient A can be calibrated from tests or from a model on the meso-scale.
Simulations
As mentioned above, we implement our compaction model in a hydro-code. We use the Lagrange processor of the old commercial code PISCES [3]. Equations (5) and (10) make a system of two ODEs to be integrated simultaneously for each computational cell and time step separately. Entering the subroutine that solves the equation of state together with conservation of energy, V_i and V_f (at the start and end of the time step) are known. The average V is then: V̄ = (V_i + V_f)/2. (11) Solving the system of two ODEs from the start of the time step to its end, we obtain P and α at the end of the time step. Substituting back into the EOS we also get E.
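A minimal Python sketch of the distension part of this per-cell update, assuming the reconstructed forms of equations (8) and (10); the flow stress Y_m and the sub-step count are illustrative, and units follow the paper (pressure in GPa, time in μs):

    import math

    def p_qs(alpha, Y_m=0.2, eps=1e-4):
        # Carroll-Holt quasi-static curve with the small-eps correction (eq. (8))
        return (2.0 / 3.0) * Y_m * math.log(alpha / (alpha - 1.0 + eps))

    def update_alpha(alpha, P, dt, A=0.005, nsub=10):
        # Integrate alpha_dot = -A * max(P - P_qs(alpha), 0) over one time step
        # with forward-Euler sub-steps (eq. (10); overstress states only).
        h = dt / nsub
        for _ in range(nsub):
            over = max(P - p_qs(alpha), 0.0)
            alpha = max(1.0, alpha - h * A * over)   # distension cannot drop below 1
        return alpha

    print(update_alpha(alpha=1.25, P=20.0, dt=0.1))  # one 0.1-microsecond step at 20 GPa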
We ignore the strength of the porous material. The parameters of this EOS (with the usual notation) are: Next we show results of several planar impact 1D runs, with different values of the coefficient A, the initial porosity φ₀, and the incoming shock pressure P_in. The target is 100 mm long, and the mesh is 10 cells/mm. The incoming shock is applied by a pressure boundary condition at x = 0, and the right boundary at x = 100 mm is free.
In figure 1 we show pressure histories at four Lagrange distances into the target, every 20 mm. The incoming shock is 20 GPa, the initial porosity is 20%, and A = 0.005 (GPa·μs)⁻¹. We see from figure 1 that, as in viscoplastic response, here too there is a precursor decay phenomenon, and the rate of decay depends on the value of the coefficient A. The last two curves show the effect of the release wave from the free boundary. We see that at x = 80 mm the pores did not quite close before the release wave arrived.
In figure 2, P_in and φ₀ are the same, and we show the influence of the rate coefficient A. We see from figure 2 that as A increases, precursor decay is faster, and the history curves become steeper.
In figure 3, P_in is the same, A = 0.010 (GPa·μs)⁻¹, and the initial porosity is 10, 20 and 30%. We see from figure 3 that, as expected, for higher initial porosity the steady precursor is lower, and the rise time is higher. In figure 4 we go back to 20% porosity, and the incoming shock is 10, 20 and 30 GPa. We see from figure 4 that the stronger the incoming shock, the steeper the history curves.
Finally, we show in figure 5 the U(u) relation obtained for φ₀ = 20% and A = 0.010 (GPa·μs)⁻¹, where u is the particle velocity at the plateau level, and U is the speed of arrival of the plateau level. Figure 5: U(u) plot for 20% initial porosity and A = 0.010 (GPa·μs)⁻¹; u is the particle velocity at the plateau level, U is the speed of arrival of the plateau level.
Summary
We focus on the dynamic compaction of porous materials. We use Herrmann's suggestion to express the EOS of the porous material in terms of the EOS of the matrix material and the porosity. For a compaction law we use the overstress approach, referenced to the rate independent compaction law derived from Carroll and Holt's quasi-static spherical shell model. This makes our compaction law rate dependent.
We express our compaction model by rate equations and implement them in a hydro-code. We run examples of planar impact on a porous stainless steel target. Using pressure history plots at locations down the target, we show the following: pore compaction precursor decay; the influence of the rate dependence parameter; the influence of the initial porosity; the influence of the incoming shock level; and the U(u) plot for the plateau arrival. Our pore compaction model still needs to be validated and calibrated for different materials by appropriate tests.
"year": 2014,
"sha1": "1f213052bde41e6f999155943f16817978a9ad97",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/500/18/182030",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "831e3e691423de6f7156c11d648365bebd1266d8",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science",
"Engineering"
]
} |
240205292 | pes2o/s2orc | v3-fos-license | Cybersecurity in the Internet of Things in Industrial Management
: Nowadays, people live amidst the smart home domain, business opportunities in the industrial smart city, and health care, along with concerns about security. Security is central for IoT systems to protect sensitive data and infrastructure, while security issues become increasingly expensive, in particular in Industrial Internet of Things (IIoT) domains. Nonetheless, there are some key challenges in dealing with those security issues in IoT domains: applications operate in distributed environments such as Blockchain, varied smart objects are used, and sensors are limited in machine resources. Thus, traditional security does not fit IoT systems. In this vein, the issue of cyber security has become paramount to the Internet of Things (IoT) and the Industrial Internet of Things (IIoT) in mitigating cyber security risk for organizations and end users. New cyber security technologies and applications present improvements for IoT security management. Nevertheless, there is a gap in the effectiveness of IoT cyber risk solutions. This review article discusses the trends around opportunities and threats in cyber security for IIoT.
Introduction
The internet of things (IoT) aims at integrating the digital and physical universes into a single system, thus providing major business opportunities for several sectors such as industry, tourism and energy. It has created a new paradigm in which a network of machines and devices capable of communicating and collaborating with each other drives new processes. However, IoT is fragile in terms of many security issues that are frequently highly demanding due to its complex context and a vast number of tools, which present flaws in terms of resources [1]. IoT is a system where the Internet is linked to the physical world through sensors [2] and can be seen as the management of a network of devices, home appliances and vehicles, which is challenging due to the dynamic nature of the linkage between devices, actors and resource constraints [3,4], involving hardware, software, sensors, and connectivity that allow them to connect, gather and exchange data [5]. Central to IoT is the "smart factory", which comprises diverse elements: people, processes, intelligent objects and the technological ecosystem [6]. IoT extends traditional Internet connectivity to traditionally non-connected physical devices such as cars and electric tools, just to mention a few. As well, IoT is strongly related to manufacturing, in order to produce high quality products at low costs by putting together the Industrial Internet of Things (IIoT), cloud computing and big data analytics [7], including robots [8].
The IoT has therefore become prevalent across diverse sectors, with the Internet-of-Battlefield-Things (IoBT) or the Internet-of-Vehicles (IoV) [3,9], along with its security issues, contributing to the increase in cyber attacks. Therefore, lately, there has been a concern with cyber security in this domain, amid a lack of policy direction and a lack of understanding of user values related to cyber security in terms of the IoT, while policy has not been guided by key stakeholder values [10].
Literature Review: key concepts
Due to the characteristics of both cyber attacks and IoT systems, it is necessary to understand the concepts discussed before moving on to the major current trends on the issue.
Cyber security
Cyber security has turned out to be a major worry, as most of our everyday objects can be connected to the internet, which is paramount in our daily lives. Thus, if something can be connected, it can be accessed. The primary concern for cyber security relies thus upon intrusion detection [23], where physical or cloud computing activities are monitored through analysis of system vulnerabilities and activity patterns [24]. The attacks can assume the form of, for example, Distributed Denial-of-Service (DDoS) [13], malicious IPs [25] and data manipulation [26], with ensuing outcomes such as loss of information, operational losses and health damage [22,27].
Internet of Things
As already mentioned, the Internet of Things (IoT) can be described as a new theme that encapsulates both the prevailing internet and physical artifacts [1]. We can mention, for instance, the 'smart home' [11] referring to home automation, manufacturing systems as the industrial process, and health in terms of hospital automation [11]. In this vein, IoT heavily augments the multiple gadgets and connected devices in our lives, for instance in smart grids [11] and in transportation through Electric Vehicles (EV) [3]. Thus, internet technology, although presenting countless advantages, poses serious threats as well [28]. IoT applications consequently cover a wide range of artifacts, from a smart home [25] to a huge smart factory [6] and smart grids [13]. In all the mentioned cases, the corresponding devices are complemented with wireless interfaces of a wireless sensor network (WSN), which constitutes a key IoT technology [1,2] for the wide stream of IoT systems, in particular the 'smart grid', the 'internet of things', 'manufacturing systems', 'smart cities', and cloud computing in transport and smart homes [6,8,11,25]. On the one hand, in the case of the smart home, it is advisable to protect sensor identities from being recognized through wireless communication networks, while keeping the software up to date, from trustworthy vendors and cloud providers [1]. On the other hand, in the case of smart cities, to which much of the population will tend to migrate, IoT offers multiple services such as smart parking, environmental, waste, water and traffic management, and energy consumption monitoring, through operations that span IoT energy and architecture efficiency, mitigating its environmental effects, always aware of its context interplay [26,29].
The Industrial Internet of Things (IIoT)
IIoT presents diverse nuances that differentiate it from traditional IoT. While IoT operates in domestic environments, IIoT operates in industrial environments. In this way, it copes with, for example, the optimization of supply chains; that is to say, IIoT equals Industry 4.0 [30], which is a shared term for technologies and theories of value chain organization [18,31]. Industry 4.0 presents a modular structure, through which computers monitor and manage smart factories and their ensuing physical processes [32], creating a digital copy of the physical processes while making decentralized decisions [33]. Along the process, computer systems interact with one another and with people at once [30].
Also, both organizational and inter-organizational services can be provided to actors in the supply chain; interconnected objects, managed and accessed through data mining processes like Blockchain, can be partly accessed, function as sensors, and are enabled to interact with other devices [34,35]. Such a system, made up of smart artifacts within the IoT system, demands minimal or no human action to exchange and produce data, often assisted by Artificial Intelligence mechanisms [36]. To summarize, the major IIoT concerns include reducing material and energy consumption; better managing the temporal dimensions of security in terms of intrusion detection, cloud computing and the interface between supply chain management and marketing processes; and better managing the complexity of infrastructures in terms of the number of entry points [11,18,32,34,37,38].
The IIoT assembles, therefore, both cyber security and IoT concerns in general. It focuses on integrity, in which data is protected from modification by unauthorized parties; authentication, in which the data source is verified as the intended identity [39]; privacy, in which users' identities are non-traceable from their behaviors [40]; confidentiality, in which information is made unintelligible to unauthorized entities; and availability, in which the system services are available only to legitimate users [41].
IIoT thus faces important challenges in terms of, for example, operations in decentralized environments such as Blockchain systems [42,43] and the varying nature of smart artifacts [44]. Also, it is noteworthy to mention the sparse computational resources and power available to the diverse sensors, which result in insufficient traditional security measures [9,45]. These issues increase the chances of cyber attacks on IoT systems, namely plants, transportation and household appliances [9], demanding substantial improvement in terms of authentication for remote systems, encryption for new sensors, and web interfaces and computer software for intrusion detection [46]. Additionally, the more IoT innovation there is, the more development in wireless technologies as well, such as 5G, optimized well beyond voice and data, offering a vast array of opportunities [15,47].
The literature review we present in this piece also suggests a set of security solutions for wireless sensor networks with respect to IoT [48,49,50], in particular in terms of network computing and decentralized architectures made up of countless objects [15], such as Blockchain [25] and cloud computing systems that ease network management and configuration [51,52], thereby improving IoT security [53], through sensors that optimize the sending of data, avoiding redundancy in the wireless channels, with systems such as big data that improve networking [18,30,54,55].
The design of the conceptual and technological framework for this piece of literature was not made randomly: we did a preliminary search on Scopus with the keywords "Internet of Things" and "Cyber Security", whose results are presented and discussed in the following sections.
Materials and Methods
This investigation uses a Systematic Review of Bibliometric Literature (LRSB), as proposed by Rosário and Raimundo [56], Raimundo and Rosário [57], and Rosário et al. [58]. This qualitative approach analyzes and synthesizes documents on cybersecurity in the internet of things in industrial management that clearly indicate determining contexts and the purpose of the research through a rigorous and precise design, summarizing and combining relevant studies and thus expanding usable knowledge for decision-making and strategies. The main advantage of qualitative research is that it allows the collection and analysis of data on cybersecurity factors in the internet of things in industrial management. LRSBs are designed to be methodical, explicit and replicable. This type of study provides guidance for the development of sketches, indicating new methods for future investigations, and identifies which research methods have been used. With this methodology, we intend to build new knowledge about the context of cybersecurity in the internet of things in industrial management.
The LRSB process was carried out in 3 phases divided into 6 steps (see the table below).
Phases and steps of the LRSB process:
Exploration — Step 1: research problem; Step 2: search of the appropriate literature; Step 3: critical appraisal of the chosen studies; Step 4: synthesis of data from individual sources.
Interpretation — Step 5: reports and recommendations.
Communication — Step 6: presentation of the LRSB report.
The database used for the indexing of scientific and/or academic documents was SCOPUS, the most important peer-review database in the scientific and/or academic environment, with nearly 19,500 titles from more than 5,000 international publishers, covering 16,500 peer-reviewed journals in the scientific and/or academic fields.
However, we acknowledge that the study has the limitation of considering only the SCOPUS indexing database, excluding other scientific and academic indexing databases.
The bibliographic research includes peer-reviewed scientific and/or academic documents published up to September 2021. The initial search used the keyword "Cyber Security" to track abstracts, titles and keywords; 15,748 documents were identified with this keyword, reduced to 1,316 after adding the keyword "Internet of Things". The search was later limited to the research area "Business, Management and Accounting" to ensure that only the most relevant research was retained (Table 2).
Finally, content and thematic analysis techniques were used to recognize, analyze and report the various documents, as proposed by Rosário. Of the 60 documents selected, 28 are conference papers, 24 are articles, 4 are reviews, 3 are books, and 1 each is a book chapter and a short survey.
Publication distribution: peer-reviewed articles on the topic span the period 2014–2021. The year 2019 was the one with the most peer-reviewed publications on the subject, reaching 15. In Table 3 we analyze the Scimago Journal & Country Rank (SJR), the best quartile and the H index by publication.
The International Journal of Information Management is the most cited publication, with 2.770 (SJR), Q1 and an H index of 114.
There is a total of 7 journals in Q1, 4 journals in Q2, 7 journals in Q3 and 5 journals in Q4. Journals in the best quartile, Q1, represent 15% of the 48 journal titles; Q2 represents 8%, Q3 represents 15%, and finally Q4 represents 10% of the 48 journal titles. Finally, for 25 of the publications, representing 52%, the data are not available.
As evident from Table 3, the significant majority of articles on cybersecurity in the Internet of Things in industrial management are in the Q1 best quartile index. The documents are distributed across fields of study: (14); Economics, Econometrics and Finance (7); Energy (5); Medicine; Environmental Science (3); Mathematics; and Physics and Astronomy.
The most cited article was "Blockchain technology innovations", with 155 citations, published in the 2017 IEEE Technology and Engineering Management Society Conference, TEMSCON 2017, with 0.210 (SJR), no quartile yet assigned, and an H index of 6.
The published article demonstrates the use of Blockchain technology in various industrial applications.
In Figures 2 and 3, a bibliometric study was performed to examine the development of scientific information by the main keywords. The study of bibliometric outputs with the scientific software VOSviewer aims at identifying the main research keywords "Cyber Security" and "Internet of Things".
The research relied upon the studied articles on cybersecurity in the Internet of Things over the last decade. The correlated keywords can be viewed in Figure 4, which makes clear the network of keywords that appear together/linked in each scientific document.
Discussion
The aforementioned topics related to cyber security in IIoT emerge in the literature under distinct subthemes, such as machine learning and cloud computing, through several applications related to security. These concepts have been widely deployed to solve important issues, and the literature highlights principal authors, particularly Ahram [20] and Ardito [32] (Figure 5). The key themes that underscore the current debate are also illustrated in Figures 3 and 4:
Cyber Security
Cyber security, as already discussed, has focused primarily on securing distinct data from physical and cloud threats. It deals with the cyber security threats to digital infrastructure, which are a concern for the maintenance of business growth amidst a scenario of changing technologies in the social, mobility, analytics and cloud (SMAC) domains and the Internet of Things (IoT), demanding the validation of new cyber security capabilities [39]. It focuses on users' susceptibility to cyberattack and on how different factors, e.g., users' competence to deal with online threats, mediate this relationship in IoT [40]; it elicits major significant threat drivers and identifies emerging technologies, e.g., encryption and blockchain, that are likely to have an impact on defense and attack capabilities in cyber security [16].
Existing literature also identifies major platforms that could accommodate smart objects, such as smart home systems, which are platforms for connecting sensors that are consequently exposed to identity theft and need to be protected [1]; the issues of securing automated power consumption units implemented by smart system technology in an environment controlled by IoT [41]; and reviews of the most critical technologies, best practices, policies, and security frameworks in different countries, involving government, industry, civil society, and academia [59]. Finally, some pieces of literature examine whether Cyber Security Law is justified, analyzing countries, e.g., China, that need a cyber security regime [21].
Machine Learning
Closely related to cyber security is the issue of machine learning, which includes artificial intelligence. This theme is focused on intelligence for energy management, including production systems and their cyber security in Industry 4.0 and the Internet of Things [49]. It is much centred on the interplay between the feature selection and interpretation steps in a machine learning workflow, aiming at intrusion detection in IoT networks [24]. It often resorts to artificial intelligence (AI) techniques for recognizing a cyberattack in internet-connected systems, in domains such as smartphones or robotic factories, and for deciding what to do in the event of an incident through data mining approaches, in which AI will improve cyber countermeasures [8]. Others intend to involve AI in business, assisting in adopting a strategy that is rational, relevant, and practical across enterprise functions, including disruptive technologies such as IoT, blockchain, and cloud computing [36].
Internet of Things (IoT)
IoT is central to this debate, and its influence extends to industry (IIoT), networking, and cloud computing. This issue has shed light on the implementation of intrusion detection systems able to protect data and physical devices, for instance through AI that enables an intelligent intrusion detection model to detect threats via decision trees for network intrusion detection [23]. It also applies to household participants, to obtain control over the intelligent IoT agents operating in their personal spheres [60]; to executives, in providing businesses with an approach for securing an enterprise through a dynamic architecture of an Extended Risk-Based Approach on Cloud and IoT [61]; and in general to all virtually paperless work environments [62], addressing threats related to distributed denial of service (DDoS) on power grids and hacking of industrial control systems (ICS), along with the ensuing regulatory responses [13].
IoT theory also searches for solutions related to how supply chains as a whole may benefit from the adoption of 4.0 technologies, delivering the flexible response customers want and benefiting from big data, cloud computing, and cyber security through an improved communication system [30]. For example, it aims at analyzing device and network security while considering different scenarios involving varying attackers intending to destroy the IoT wireless network [17], or at applying learning curves to major global cyber-attacks [42].
Another stream of literature explores what technologies are being deployed and where the organizational risk is being considered within the organization, building a risk model to deal with AI, IoT, and distributed ledgers [43], while others offer a detailed study of trust management models to enforce different security measures in IoT systems, thus ensuring the safety of connected devices [44]. This includes technologies such as augmented reality (AR), a concept that connects the real world to the virtual world, applied to developing guidelines for Industry 4.0 [31], and to tourism, in integrating business and key performance metrics to build a strategy for smart tourism [63].
It is noteworthy to mention the case of Electric Vehicle (EV) cybersecurity issues: identifying the key matters of the EVs developed so far that do not address cybersecurity requirements (e.g., the EV battery stacks) [3] and networks [4], and suggesting strategies the auto industry might pursue to face cybersecurity threats [28]. In the same vein, others detail the security vulnerabilities of unmanned ships, subsequent defense strategies, and ensuing countermeasures [55], while signaling the vulnerabilities of wireless systems based on software radios [45].
To summarize, as smart devices grow in number, there is a corresponding growth in risks, both to the user and to the internet as a whole, from hacking threats [54], whereas there is a lack of policy direction, user values on cybersecurity are misunderstood, and there is a lack of clarity as to how IoT public policy should be developed, for instance guided by stakeholder values [10]. Moreover, a new paradigm of proactive antifragility is demanded for cyber defence approaches in IoT, for instance in distributed computing paradigms of high complexity beyond traditional cyber defence, e.g., the Internet of Battle Things (IoBT) [9], able to cope with novel threats [53].
Finally, others propose solutions to new wireless challenges such as 6G, shifting the paradigm "from Internet of Things (IoT) to Internet of Intelligence (IoI)" to provide connectivity while maintaining the ability to process knowledge and make decisions autonomously [47]; or develop a novel methodology for fingerprinting IoT devices, building data-driven techniques rooted in machine learning methods that allow unveiling compromised IP addresses throughout diverse geographical areas [5]. The literature also focuses on the scope of distributed denial of service (DDoS), in terms of classifications and opportunities for attacks, particularly in the health sector, where security is limited [22], while examining the changing legal environment in the IoT regulatory context [48].
Industry 4.0 (IIoT)
Industry 4.0 is another important subtheme related to IoT; it is also known as the Industrial Internet of Things (IIoT). Some literature investigates the influence of critical technologies such as artificial intelligence, big data, and virtual/augmented reality on the circular economy, e.g., recycling and the reduction of waste and emissions, which confirms the importance of Industry 4.0 for improving circularity [18]. Others address the impact of those digital technologies on e-finance, an opportunity to change business models, for example through AI [27]; develop such digital technologies for managing the interface between supply chain management and marketing processes in sustaining supply chain management-marketing (SCM-M) integration [32]; focus on the oil and gas industry in terms of the threats posed by the migration of sensitive business data to cloud digital platforms in industrial processes, which include decision-making processes and procedures [33]; and examine the increase in entry points that organizations must defend from threats [34].
Another stream of literature spots the research gaps in Industry 4.0, using an open (Google) internet-based research search engine (OIBRSE) to acquire the digital object identifiers, or the universal resource locators when a DOI does not exist, of research articles [53], namely regarding current debates around, for example, the smart factory, which is based on ICT technology and used to drive down manufacturing costs and time, while security vulnerabilities must be reduced [6]. Moreover, with regard to this sort of critical infrastructure, securing electro-energy platforms represents an important demand for a secure platform for monitoring and controlling Electric Vehicle Recharge systems [52].
The main issue is always centred on how cyber security deals with cyber-attacks in Industry 4.0, while mapping current topics such as 'cloud computing', 'smart grid', 'intrusion detection', 'privacy', 'internet of things', and 'smart cities' [11], in order to keep up with this technological paradigm shift and to introduce measures that prevent significant expected fatalities [51]; across disparate sectors, from e-commerce to banks, on how to cope with digitalization while keeping customers a priority [37]; and on how the degree of implementation increases with firm size among different manufacturers [7]. In the end, the Industrial Revolution 4.0 will have economic, social, and political consequences at the global level, causing revolutionary changes in the intelligent processes of goods production and services, with likewise rising unemployment and social stratification [38].
Blockchain and cloud computing
Decentralized architectures of IoT devices have been a topic of current debate. Recent achievements in blockchain technologies, for example, have allowed the use of a smart ecosystem able to support cybersecurity mechanisms across distinct sectors, such as smart home installations, focusing on the immutability of users and devices as well as the dynamic and immutable management of blocked malicious IPs [25]; in multiple industrial applications, healthcare, finance, and government [20]; and in terms of solutions for cyber security problems such as accountability, traceability, and identification [12].
The point is how to better improve the security of system architectures, in order to provide protection against malicious internal users and malware implanted inside the system, which can be solved through preventive safeguards inherent to the blockchain security architecture [19]. Blockchain may contribute to the privacy, security, and non-repudiation of an IoT system, given the large amount of data generated and the variety of sensors and devices adopted [2], as blockchain technology builds a scalable and decentralized end-to-end secure IoT system [14]. The IoT can also be enhanced with AI at the gateway level to detect and classify suspected activities [14]. Moreover, blockchain technology is also of use in parallel with cloud computing for higher education, in terms of primary infrastructure topology, putting together machine learning and artificial intelligence in training opportunities [35].

Cloud computing is therefore highly correlated with blockchain technology in preventing attacks against, for example, radio-frequency (RF) enabled hardware, Internet of Things (IoT) firmware, and wireless protocols [50]. The interconnectedness of intelligent devices and the use of public networks are at the centre of the debate with regard to smart cities, due to the interconnected services provided to their citizens, where cyber security has become a major concern [29], namely in issues such as communication infrastructures, cloud computing, collaborative platforms, big data, smart health, and energy management [26].
The discussion on cloud computing covers the subtheme of cyber security of supply chains based on software and networks, to minimize the risks of purchasing and of disconnection of key machines from networks [46]. To summarize, 5G and 6G networks can provide a novel communication network infrastructure, although IoT systems will retain the same energy capacity, leaving weaknesses for hackers to take advantage of. There is a need for a system to identify and counter potential threats in those next-generation networks and decentralised systems like blockchain [15].
Conclusions
The IoT has been a key element for, for example, smart manufacturing, smart cities, smart health, smart grids, and EVs. IoT and IIoT thus bridge physical artifacts and the internet, either in our daily lives or in the industrial environment. On the one hand, such linkage unveils countless opportunities; on the other hand, it exposes our information and behaviors to potential hacking of sensitive data and critical infrastructure.
Additionally, IoT produces huge amounts of information that need to be protected and that are linked to varied security risks related to its interconnectedness, whether through cloud computing or blockchain, for example in smart factories, smart homes, and smart cities. In this way, given the need for decision making and investment, cyber security must first focus on the varying weaknesses of IoT objects and further work on security mechanisms such as privacy, access control, data storage, and authorization, while organizations should adopt a cyber security strategy. Therefore, organizations need to keep up with the development of technologies to respond appropriately to cyber security threats. This study fills a gap in IIoT cyber security and intends to encourage further research on the topic.
Furthermore, emerging technologies such as blockchain may play a central role in the future of cyber security in IoT and IIoT, while security will become more important in the future because the number of objects with wireless connections will grow in the short term and extend to virtually all areas of our daily lives, which need to be effectively managed.
Author Contributions: Ricardo J. G. Raimundo and Albérico M. Rosário. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Conflicts of Interest:
The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results. | 2021-10-30T15:11:06.833Z | 2021-10-21T00:00:00.000 | {
"year": 2022,
"sha1": "b10d3ef44f16f42999cbcbfe9dd65384775c1666",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/12/3/1598/pdf?version=1643805455",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3a879050e6cbffeaf4af1e2b799edd738b183aad",
"s2fieldsofstudy": [
"Computer Science",
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": []
} |
165074052 | pes2o/s2orc | v3-fos-license | Chitosan nanoparticles on a natural zeolite as an efficient adsorbent for Congo red
Congo red is a toxic synthetic dye that cannot be readily degraded by conventional methods, thus posing a risk to the environment. The increased use of this dye is mainly due to developments in the textile industry. A possible way to reduce Congo-red waste is through adsorption processes; therefore, in this study, we used a natural zeolite from Bayah and modified it with chitosan nanoparticles (Na-zeolite@chitosan) to increase its adsorption capacity. The obtained Na-zeolite@chitosan material, which was able to efficiently adsorb Congo red, was further characterized by ultraviolet-visible (UV-Vis) and Fourier-transform infrared (FTIR) spectroscopy as well as by transmission electron microscopy (TEM). A maximum of 98.019% of the Congo red was adsorbed at a concentration of 800 ppm and a pH of 5 for 60 min (the adsorption capacity was 0.00428 mmol/g). Our results show that Congo red adsorption on Na-zeolite@chitosan follows the Freundlich adsorption isotherm and can be described using a pseudo-second-order kinetic model.
Introduction
Today, textiles have become a major export commodity in several countries; however, their production has a negative effect on the environment because it generates nondegradable, toxic, and stable synthetic dyes [1]. One of these dangerous dyes is Congo red, which can barely be decomposed in nature. This compound has a complex aromatic chemical structure and is therefore physicochemically and thermally stable [2], and it exhibits carcinogenic properties [3].
Chitosan is a biopolymer obtained by the deacetylation of chitin and can be used as an adsorbent for heavy metals and dyestuffs such as Congo red [20]. Other dyestuff adsorbents include cetyltrimethylammonium bromide [2] and chitosan nanoparticles, which can be used as adsorbents for acid orange and acid red [21,22].
To the best of our knowledge, zeolites that were modified using chitosan nanoparticles have not been used as Congo-red adsorbents until now. In this study, we modified a zeolite with nanochitosan and tested it as a Congo red adsorbent by monitoring several parameters such as pH, reaction time, and dye concentration. We also determined the adsorption isotherm and studied the kinetics of the adsorption process.
Activation of natural zeolite
The natural zeolite was physically activated by washing in double-distilled water at 70 °C for 1 h. For chemical activation, NaOH and HCl were added to the natural zeolite, and the zeolite cations were homogenized by adding 1 M NaCl. The mixture was then stirred at 70 °C for 6 h and left to precipitate for 12 h. The precipitate was finally dried at 105 °C. The chloride ions were eliminated from the zeolite by washing in double-distilled water.
Synthesis of chitosan nanoparticles
Synthesis of chitosan nanoparticles was conducted by adding NH3 and 0.01 M CH3COOH solution to chitosan powder. The mixture was further stirred for 2.5 h. The formation of chitosan nanoparticles was detected by the appearance of a white colloid.
Modification of the zeolite with chitosan nanoparticles
The zeolite was mixed with the chitosan-nanoparticle solution. This mixture was then stirred for 2.5 h, centrifuged to form the precipitate, and dried using N2.
Na-zeolite@chitosan nanoparticles as adsorbents for Congo red
The obtained material was tested by adding 10 mL of 800 ppm Congo red to 0.1 g of Na-zeolite@chitosan nanoparticles at pH 5 and stirring the mixture several times over 60 min. Then, ultraviolet-visible (UV-Vis) spectroscopic measurements were performed between 200 and 800 nm. The kinetic model for Congo red adsorption was determined by monitoring the decrease in absorbance at 499 nm [23]. In addition, Langmuir and Freundlich isotherms were fitted to the experimental data to describe the adsorption process at the equilibrium point.
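As a rough illustration of how such fits are commonly performed, the sketch below applies SciPy linear regression to the linearized Freundlich isotherm and pseudo-second-order kinetic forms. All numeric arrays are hypothetical placeholders, not the values measured in this study.

```python
import numpy as np
from scipy import stats

# Hypothetical equilibrium data (NOT the measured values from this study)
Ce = np.array([10.0, 25.0, 60.0, 120.0, 200.0])            # ppm at equilibrium
qe = np.array([0.0008, 0.0015, 0.0024, 0.0033, 0.0042])    # mmol/g

# Freundlich isotherm, linearized: log qe = log KF + (1/n) log Ce
f = stats.linregress(np.log10(Ce), np.log10(qe))
print(f"Freundlich: 1/n={f.slope:.3f}, KF={10**f.intercept:.3g}, R2={f.rvalue**2:.3f}")

# Pseudo-second-order kinetics, linearized: t/qt = 1/(k2*qe^2) + t/qe
t = np.array([5.0, 10.0, 20.0, 40.0, 60.0])                # min
qt = np.array([0.0030, 0.0036, 0.0040, 0.0042, 0.00428])   # mmol/g
k = stats.linregress(t, t / qt)
print(f"PSO: qe={1/k.slope:.4g} mmol/g, k2={k.slope**2/k.intercept:.3g}, "
      f"R2={k.rvalue**2:.3f}")
```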
Characterization
A Shimadzu 2600 UV-Vis spectrophotometer was used to obtain the absorption spectra describing the Na-zeolite@chitosan nanoparticle-Congo red adsorption activity. Identification of the functional groups present in the natural zeolite was accomplished by Fourier-transform infrared (FTIR) spectrometry (PerkinElmer) in the range of 4,000-400 cm −1 . A JEM 1400 transmission electron microscope was used to determine the structure and size of the Na-zeolite@chitosan nanoparticles.
Results and discussion
The homogenization of the natural-zeolite cations aims to make the cations in the zeolite pores uniform, which facilitates ion exchange during the Congo red adsorption process. The results of the FTIR characterization are presented in figure 1a. Compared with Na-zeolite, the Na-zeolite@chitosan nanoparticles show a peak at 3,624 cm −1 , corresponding to N-H stretching in chitosan. There is also a signal corresponding to the -OH group at 3,422 cm −1 . These are the two main functional groups of chitosan, and the -NH2 and -OH groups give the Na-zeolite@chitosan nanoparticles their adsorption ability. The obtained FTIR spectra thus show that Na-zeolite was successfully modified with chitosan nanoparticles. The TEM results, which were obtained to determine the particle shape and size, are presented in figure 1b. As can be seen, the Na-zeolite@chitosan nanoparticles have an irregular, almost spherical shape, and their size varies from 20 to 40 nm. These results show that the synthesized adsorbent is composed of chitosan nanoparticles. The presence of these nanosized structures increases the number of active sites, thereby improving the adsorption properties of the material.
Congo red was adsorbed on the Na-zeolite@chitosan nanoparticle material, and the adsorption activity was monitored by following the decrease in absorbance of the peak observed at 499 nm. The relationship between the UV-Vis absorption spectra and time is shown in figure 2a. As can be seen, the absorbance decreases at longer times, indicating the adsorption of Congo red. The Na-zeolite@chitosan nanoparticles adsorbed 98.019% of the Congo red at pH 5 after 60 min. The adsorption capacity was 0.00428 mmol/g at the optimum time of 5 min, as shown in figure 2b; the adsorption process occurs because of electrostatic interactions between the SO3− groups of Congo red and the NH3+ groups of nanochitosan.
Adsorption isotherms were calculated in order to determine the adsorption mechanism of the studied system. We found that Congo-red adsorption on Na-zeolite@chitosan follows the Freundlich adsorption isotherm with an R 2 value of 0.976; the corresponding plot is shown in figure 3a. The kinetic model for Congo red adsorption on the Na-zeolite@chitosan material was determined by plotting ln([CR]0/[CR]t) versus the reaction time, as shown in figure 3b. The results show that Congo-red adsorption follows pseudo-second-order behavior with an R 2 value of 0.999.
Conclusions
A Na-zeolite@chitosan nanoparticle material with a particle size of 20-40 nm was successfully synthesized. The modified zeolite was applied as a Congo-red adsorbent, exhibiting an adsorption ability of 0.00428 mmol/g and achieving 98.019% Congo-red adsorption (for a concentration of 800 ppm) at pH 5 after 60 min. Congo-red adsorption can be described by Freundlich's isotherm and a pseudo-second-order kinetic model. | 2019-05-26T13:55:11.465Z | 2019-02-22T00:00:00.000 | {
"year": 2019,
"sha1": "532d9291e56da3e536bba9489324953cac7129ce",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/496/1/012005",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "66225464a8efc1130f9218a34b4bd20c2ccb68cc",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
209420630 | pes2o/s2orc | v3-fos-license | Integrate multi-omics data with biological interaction networks using Multi-view Factorization AutoEncoder (MAE)
Background Comprehensive molecular profiling of various cancers and other diseases has generated vast amounts of multi-omics data. Each type of -omics data corresponds to one feature space, such as gene expression, miRNA expression, DNA methylation, etc. Integrating multi-omics data can link different layers of molecular feature spaces and is crucial to elucidate molecular pathways underlying various diseases. Machine learning approaches to mining multi-omics data hold great promises in uncovering intricate relationships among molecular features. However, due to the “big p, small n” problem (i.e., small sample sizes with high-dimensional features), training a large-scale generalizable deep learning model with multi-omics data alone is very challenging. Results We developed a method called Multi-view Factorization AutoEncoder (MAE) with network constraints that can seamlessly integrate multi-omics data and domain knowledge such as molecular interaction networks. Our method learns feature and patient embeddings simultaneously with deep representation learning. Both feature representations and patient representations are subject to certain constraints specified as regularization terms in the training objective. By incorporating domain knowledge into the training objective, we implicitly introduced a good inductive bias into the machine learning model, which helps improve model generalizability. We performed extensive experiments on the TCGA datasets and demonstrated the power of integrating multi-omics data and biological interaction networks using our proposed method for predicting target clinical variables. Conclusions To alleviate the overfitting problem in deep learning on multi-omics data with the “big p, small n” problem, it is helpful to incorporate biological domain knowledge into the model as inductive biases. It is very promising to design machine learning models that facilitate the seamless integration of large-scale multi-omics data and biomedical domain knowledge for uncovering intricate relationships among molecular features and clinical features.
Background
With the fast adoption of Next Generation Sequencing (NGS) technologies, petabytes of genomic, transcriptomic, proteomic, and epigenomic data (collectively called multi-omics data) have been accumulated in the past decade. Notably, The Cancer Genome Atlas (TCGA) Network [1] alone had generated over one petabyte of multi-omics data for comprehensive molecular profiling of over 11,000 patients from 33 cancer types. Multi-omics data includes multiple types of -omics data, each of which represents one view and has a different feature set (for instance, gene expressions, miRNA expressions, and so on). Since multiple views for the same patients can provide complementary information, integrative analysis of multi-omics data with machine learning approaches has great potential to elucidate the molecular underpinning of disease etiology. However, due to the "big p, small n" problem, many statistical machine learning approaches that require lots of training data may fail to extract true signals from multi-omics data alone.
Deep learning has achieved great success in computer vision, speech recognition, natural language processing and many other fields in the past decade [2]. However, deep learning models often require large amounts of annotated training data with clearly defined structures (such as images, audio, and natural languages), and cannot be directly applied to multi-omics data with unclear structures among features and a small sample size. Novel model architectures and learning strategies need to be invented to address the challenge of learning from multi-omics data with heterogeneous features and the "big p, small n" problem.
In this paper, we present a framework called Multi-view Factorization AutoEncoder (MAE) with network constraints [3], combining multi-view learning [4] and matrix factorization [5] with deep learning for integrating multi-omics data with biological domain knowledge. The MAE model consists of multiple autoencoders as submodules (one for each data view), and a submodule that combines individual views. The model facilitates learning feature and patient embeddings simultaneously with deep representation learning. Importantly, we incorporate domain knowledge such as biological interaction networks into the model training objective to ensure the learned feature embeddings are consistent with the domain knowledge.
Besides the molecular interaction networks, we can construct multiple patient similarity networks based on the learned patient embeddings from individual views. We included patient similarity network constraints to ensure these similarity networks for the same set of patients are consistent with each other. Equipped with feature interaction and patient similarity network constraints, our model achieved better performance than traditional machine learning methods and conventional deep learning models without using domain knowledge on the TCGA datasets [1].
Related work
Many genetic disease studies focus on molecular characterization of individual disease types [1,6], employing mainly statistical analyses to find associations among molecular and clinical features. Machine learning has been applied to individual -omics data types [7] and to integrate multi-omics data [8,9]. Because most existing deep learning models cannot handle the "big p, small n" problem effectively, many traditional machine learning methods (such as logistic regression [7], random forest [8], and similarity network fusion [9]) have been applied to -omics data.
Comprehensive multi-omics data analysis with machine learning has been a frontier in cancer genomics [1,10,11]. Unsupervised clustering approaches (such as iCluster [12], SNF [13], ANF [14], etc.) are popular for multi-omics data analysis as annotated labels are often lacking in biomedical data. Probabilistic models [12] and network-based regularization [15] have been employed to learn from multi-omics data. Recently, deep learning has been applied to sequencing data [16,17], imaging data [18], medical records [19], etc. However, most existing deep learning methods focused on individual data types instead of integrating multi-omics data. Multi-view learning provides a natural framework for learning from multimodal data. Typical techniques for multi-view learning include co-training, co-regularization, and margin consistency approaches [4]. Combining deep learning with multi-view learning more effectively is still an active research area [4]. There are multiple ways to incorporate biological networks as inductive biases into a deep learning model. Besides network regularization approaches, directly encoding biological networks into the model architecture is also possible [20,21], which usually requires subcellular hierarchical molecular networks as the prior knowledge. Because high-quality human data is lacking (human biological interaction networks such as protein-protein interaction networks are still incomplete and noisy), network regularization approaches are often preferable to directly encoding the noisy interaction network into the model architecture.
Multi-modality deep learning [22] has been successfully applied to integrate audio and video features [23] by employing shared feature representations. However, many multi-modality deep learning models still rely on large amounts of training data and do not facilitate knowledge integration. Our method can learn feature and patient embeddings simultaneously with the integration of domain knowledge to learn robust and generalizable deep learning models.
Many multi-view learning techniques have been proposed [24,25]. Many of these methods learn transformations that map each view to a latent space and reconstruct the original data from the latent space representation (i.e., adopting an AutoEncoder architecture). Importantly, they may add additional constraints to ensure the latent features for multiple views are highly correlated [24]. Our model also adopted the Multi-view AutoEncoder architecture as the model backbone, but we chose different regularization schemes for incorporating domain knowledge as inductive biases into the model. We do not assume the latent spaces learned for each view to be "canonically correlated". Instead, the learned feature representations should be consistent with the domain knowledge such as gene-gene and miRNA-miRNA interaction networks. As the gene-gene interaction network and the miRNA-miRNA interaction network are very different, the corresponding gene and miRNA feature interactions can be very different as well. Importantly, we are focusing on the multi-omics data, of which each feature (such as a gene) has a clear biological meaning and the feature interactions have been captured as domain knowledge, while many other proposed multi-view learning methods deal with data without "biologically meaningful" features (for example, in image data, individual pixels are not informative at all, but their arrangement and structure do contain information). While other widely used multi-view learning methods [22,24,25] focus on how to effectively utilize feature correlation among different views to improve model performance, our main focus in this paper is to demonstrate that biological interaction networks as an "external" domain knowledge source can be effectively incorporated into deep learning models through network regularization to improve model generalizability for multi-omics data analysis.
Our main contribution can be summarized as follows. We proposed a Multi-view AutoEncoder model with network constraints for the integrative analysis of multiomics data. Our model learns good representations for both molecular entities and patients simultaneously and facilitates mining relationships among molecular features and clinical features. Most importantly, we demonstrated that "external" domain knowledge sources such as biological interaction networks can be incorporated into the model as inductive biases, which could improve model generalizability and reduce the risk of overfitting in the "big p, small n" problem. We devised novel network regularizers that will "force" the learned feature representations to be consistent with domain knowledge, effectively reducing the search space for good feature embeddings. We have performed extensive experiments and showed that the models trained with domain knowledge outperformed those without using domain knowledge. Our work provides a proof-of-concept framework for unifying data-driven and knowledge-driven approaches for mining multi-omics data with biological knowledge.
Methods and implementation
Our method builds upon matrix factorization [5], multi-view learning, and deep learning. We will describe each component in the following section.
Some notations
Given $N$ samples and $V$ types of -omics data, we can often represent the data using $V$ sample-feature matrices: $M^{(i)} \in \mathbb{R}^{N \times p^{(i)}}$, $i = 1, 2, \cdots, V$. Each matrix corresponds to one data view, and $p^{(i)}$ is the feature dimension for view $i$.
Before describing the Multi-view Factorization AutoEncoder, we first discuss how to process individual views. For ease of description, we drop the superscript $(\cdot)$ when dealing with a single view. For a matrix $M$, $M_{ij}$ denotes the element in the $i$th row and $j$th column, $M_{i,\cdot}$ denotes the $i$th row vector, and $M_{\cdot,j}$ denotes the $j$th column vector.
Let $M \in \mathbb{R}^{N \times p}$ be a feature matrix, with each row corresponding to a sample and each column corresponding to a feature. The features are often not independent. We represent the interactions among these features with a network $G \in \mathbb{R}^{p \times p}$. For instance, if these features correspond to protein expressions, then $G$ will be a protein-protein interaction network, which is available in public databases such as STRING [26] and Reactome [27]. $G$ can be an unweighted graph or a weighted graph with non-negative elements. Let $D$ be a diagonal matrix with $D_{ii} = \sum_{j=1}^{p} G_{ij}$; then the graph Laplacian of $G$ is $L_G = D - G$.
Low-rank matrix factorization
Matrix factorization techniques [5] are widely used for clustering and dimensionality reduction. In many real-world applications, $M$ often has a low rank. As a result, low-rank matrix factorization can be used for dimensionality reduction and clustering:

$$M \approx XY, \quad X \in \mathbb{R}^{N \times k}, \; Y \in \mathbb{R}^{k \times p}, \; k \ll \min(N, p)$$

Some additional constraints are often added as regularizers in the objective function or enforced in the learning algorithm to find a good solution $\{X, Y\}$. For instance, when $M$ is non-negative, Non-negative Matrix Factorization (NMF) [28] is often a "natural" choice to ensure both $X$ and $Y$ are non-negative.
Generally speaking, the objective function can be formulated as follows:

$$\arg\min_{X, Y} \; \|M - XY\|_F^2 + R(X, Y) \qquad (1)$$

In Eq. 1, $R(X, Y)$ is a regularization term for $X$ and $Y$. For instance, $R(X, Y)$ can include $L_1$ and $L_2$ norms for $X$ and $Y$. In addition, structural constraints based on biological interaction networks can also be incorporated into $R(X, Y)$.

Interpretation: $X \in \mathbb{R}^{N \times k}$ can be regarded as a sample-factor matrix and the inherent non-redundant representation of the $N$ samples, with each column corresponding to an independent factor. These $k$ factors are often latent variables. $Y \in \mathbb{R}^{k \times p}$ can be seen as a linear transformation matrix, and the $k$ rows of $Y$ can be regarded as a basis for the underlying factor space. The observable feature matrix $M$ is generated from $X$ by the linear transformation $Y$. In a sense, this formulation can be seen as a shallow linear generative model.
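As a minimal illustration of Eq. 1, the NumPy sketch below fits a low-rank factorization with an L2 regularizer by alternating gradient steps; the shapes, learning rate, and regularization weight are illustrative choices, not values taken from the cited literature.

```python
# Minimal sketch of Eq. 1 with R(X, Y) = lam * (||X||_F^2 + ||Y||_F^2),
# solved by alternating gradient steps. Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, p, k = 100, 500, 10
M = rng.standard_normal((N, k)) @ rng.standard_normal((k, p))  # low-rank toy data

X = 0.1 * rng.standard_normal((N, k))
Y = 0.1 * rng.standard_normal((k, p))
lr, lam = 5e-4, 1e-2
for _ in range(2000):
    R = X @ Y - M                     # reconstruction residual
    X -= lr * (R @ Y.T + lam * X)     # gradient step on X (constants folded into lr)
    Y -= lr * (X.T @ R + lam * Y)     # gradient step on Y

print("reconstruction error:", np.linalg.norm(M - X @ Y))
```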
Limitations
The limitations of matrix factorization techniques often stem from their "shallow" linear structure with a limited representation power. In many real-world applications, however, we need to learn a complex nonlinear transformation. Deep neural networks are often good at approximating any complex nonlinear transformations with appropriate training on a sufficiently large dataset.
Non-linear factorization with AutoEncoder
As simple matrix factorization techniques are too limited to model complex nonlinear relationships, we can use an autoencoder to reconstruct the observable sample-feature matrix $M$, as it can approximate more complex nonlinear transformations well.
The entire autoencoder is a multi-layer neural network with an encoder and a decoder. We use a neural network with parameters $\Theta_e$ as the encoder:

$$X = \mathrm{Encoder}_{\Theta_e}(M) \qquad (2)$$

Here $X$ can be regarded as a factor matrix containing the essential information for all $N$ samples. The encoder network transforms the observable sample-feature matrix $M$ into its latent representation $X$. The decoder reconstructs the original data from the latent representation.
In our framework, for the convenience of incorporating biological interaction networks, the encoder (Eq. 2) contains all layers but the last one, and the decoder is the last linear layer. The parameter of the decoder (Eq. 3) is a linear transformation matrix, the same as in matrix factorization:

$$Y \in \mathbb{R}^{k \times p} \qquad (3)$$

The input sample-feature matrix can then be reconstructed as

$$Z = XY \qquad (5)$$

and the reconstruction error can be computed as $\|M - Z\|_F^2$. Different from matrix factorization (which can be regarded as a one-layer autoencoder), the encoder in our framework is a multi-layer neural network that can learn complex nonlinear transformations through backpropagation. Moreover, the encoder output $X$ can be regarded as the learned patient representations for the $N$ samples, and $Y$ can be seen as the learned feature representations. With the learned patient and feature representations, we can calculate patient similarity networks and feature interaction networks, and add network regularizers to the objective function.
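A minimal PyTorch sketch of this single-view factorization autoencoder is given below: a multi-layer encoder followed by a single bias-free linear decoder whose weight plays the role of $Y$. The layer sizes mirror the two-hidden-layer, 100-unit configuration used later in the experiments, but the class and variable names are ours.

```python
import torch
import torch.nn as nn

class FactorizationAutoEncoder(nn.Module):
    """Multi-layer encoder + single linear decoder (whose weight acts as Y^T)."""
    def __init__(self, p, k=100):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(p, k), nn.ReLU(),
            nn.Linear(k, k), nn.ReLU(),
        )
        self.decoder = nn.Linear(k, p, bias=False)  # decoder.weight.T is Y (k x p)

    def forward(self, M):
        X = self.encoder(M)   # patient representations (N x k)
        Z = self.decoder(X)   # reconstruction Z = X @ Y
        return X, Z

model = FactorizationAutoEncoder(p=4942)
M = torch.randn(8, 4942)              # toy batch of 8 "patients"
X, Z = model(M)
recon_loss = ((M - Z) ** 2).sum()     # ||M - Z||_F^2
```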
Incorporate biological knowledge as network regularizers
We aim to incorporate biological knowledge such as molecular interaction networks into our model as inductive biases to increase model generalizability. Denote $G \in \mathbb{R}^{p \times p}$ as the interaction matrix among the $p$ genomic features, which can be obtained from biological databases such as STRING [26] and Reactome [27].
Since our model can learn a feature representation $Y$, this representation should ideally be "consistent" with the biological interaction network corresponding to these features. We use a graph Laplacian regularizer to minimize the inconsistency between the learned feature representation $Y$ and the feature interaction network $G$:

$$\mathrm{Trace}\left(Y L_G Y^T\right) = \frac{1}{2} \sum_{i,j} G_{ij} \left\| Y_{\cdot,i} - Y_{\cdot,j} \right\|^2 \qquad (6)$$

$L_G$ is the graph Laplacian matrix of $G$ in Eq. 6. $G_{ij} \geq 0$ captures how "similar" feature $i$ and feature $j$ are. Each feature $i$ is represented as a $k$-dimensional vector $Y_{\cdot,i}$, and we can calculate the Euclidean distance between features $i$ and $j$ as $\|Y_{\cdot,i} - Y_{\cdot,j}\|$. The term $\mathrm{Trace}(Y L_G Y^T)$ is a surrogate for measuring the inconsistency between the learned feature representation $Y$ and the known feature interaction network $G$: when $Y$ is highly inconsistent with $G$, this loss term will be large. Therefore, minimizing the loss function effectively reduces the inconsistency between the learned feature representation and the biological interaction network.
The objective function incorporating biological interaction networks through the graph Laplacian regularizer is as follows:

$$\arg\min \; \|M - Z\|_F^2 + \alpha \, \mathrm{Trace}\left(Y L_G Y^T\right) \qquad (7)$$

In Eq. 7, $\alpha \geq 0$ is a hyperparameter weighting the network regularization term. In practice, we normalize $G$ and $Y$ so that $\mathrm{Trace}(Y L_G Y^T)$ is within the range $[0, 1]$. In the implementation of our model, we set $\|G\|_F = 1$ and $\|Y_{\cdot,i}\| = \frac{1}{\sqrt{p}}$, $i = 1, 2, \cdots, p$ (which also means $\|Y\|_F = 1$). This facilitates easy multi-view integration, since all the network regularizers from individual views are on the same scale.
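The regularizer of Eqs. 6-7 takes only a few lines of PyTorch, as sketched below with a random stand-in for the interaction network $G$; the normalization follows the conventions stated above ($\|G\|_F = 1$, column norms of $Y$ equal to $1/\sqrt{p}$).

```python
import torch

def laplacian_penalty(Y, G):
    """Trace(Y L_G Y^T): inconsistency between feature embedding Y and network G."""
    L = torch.diag(G.sum(dim=1)) - G          # graph Laplacian L_G = D - G
    return torch.trace(Y @ L @ Y.T)

p, k = 500, 100
G = torch.rand(p, p)
G = (G + G.T) / 2                             # stand-in symmetric interaction matrix
G = G / G.norm()                              # normalize so that ||G||_F = 1
Y = torch.randn(k, p, requires_grad=True)
Y_n = Y / (Y.norm(dim=0, keepdim=True) * p ** 0.5)  # column norms equal 1/sqrt(p)
penalty = laplacian_penalty(Y_n, G)
penalty.backward()                            # differentiable w.r.t. Y
```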
Measuring feature similarity with mutual information
Eq. 6 uses Euclidean distance to measure the dissimilarity between learned feature representations. Euclidean distance relies on the inner product operator, which is essentially linear. The fact that two molecular entities interact with each other does not imply that they should have very similar feature representations or a small Euclidean distance. Mutual information can be a better metric quantifying if two molecular entities interact with each other.
Let's briefly review the definition of mutual information between two random variables X and Y.
For a discrete random variable $X \sim P(x)$ (where $P(x)$ is the discrete probability distribution of $X$), the entropy of $X$ is defined as $H(X) = \sum_x P(x) \log \frac{1}{P(x)}$, and the mutual information between $X$ and $Y$ can be written as $I(X; Y) = H(X) + H(Y) - H(X, Y)$. The observed sample-feature matrix $M \in \mathbb{R}^{N \times p}$ can be used to measure the pairwise mutual information scores between features $i$ and $j$: $\mathrm{MutualInfo}(M_{\cdot,i}, M_{\cdot,j})$. However, due to measurement noise and error, this may not be accurate.
Ideally, the reconstructed signal from the proposed autoencoder model should reduce the noise in the data. Thus we can calculate pairwise normalized mutual information scores using the reconstructed signal $Z$ (Eq. 5):

$$K_{ij} = \mathrm{NormalizedMutualInfo}\left(Z_{\cdot,i}, Z_{\cdot,j}\right) \qquad (8)$$

$K$ can be regarded as a learned similarity matrix based on mutual information. Again we want to ensure that the learned similarity matrix is consistent with the known biological interaction network $G$. We can estimate the consistency between $G$ and $K$ as $\|G \odot K\|$, where $\odot$ is element-wise matrix multiplication. As $G$ and $K$ are the normalized feature interaction network and the pairwise feature mutual information matrix, the norm of their element-wise product can serve as an estimate of the consistency between $G$ and $K$. We inject this mutual information term into the objective as a regularizer:

$$\arg\min \; \|M - Z\|_F^2 + \alpha \, \mathrm{Trace}\left(Y L_G Y^T\right) - \gamma \, \|G \odot K\| \qquad (9)$$

$\alpha, \gamma$ are non-negative hyperparameters. There are numerical methods to measure the mutual information between two continuous high-dimensional random variables. The simplest approach is to divide the continuous space into small bins and discretize the variables with these bins. In order to estimate mutual information from data accurately, a large sample size is needed. Due to the difficulty in accurately calculating mutual information based on a limited number of data points, we do not include the mutual information term in the following discussion and leave this for future work.
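A simple binning-based estimate of the pairwise mutual information matrix $K$ could look as follows; the bin count and toy data are illustrative, and scikit-learn's mutual_info_score merely stands in for whatever estimator one prefers.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def pairwise_mi(Z, bins=10):
    """Pairwise MI matrix K over the columns (features) of the reconstruction Z."""
    Zb = np.stack([np.digitize(z, np.histogram_bin_edges(z, bins))
                   for z in Z.T])             # discretize each feature into bins
    p = Zb.shape[0]
    K = np.zeros((p, p))
    for i in range(p):
        for j in range(i, p):
            K[i, j] = K[j, i] = mutual_info_score(Zb[i], Zb[j])
    return K

Z = np.random.randn(200, 5)                   # toy reconstructed signal (N=200, p=5)
K = pairwise_mi(Z)
```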
Multi-view factorization AutoEncoder with network constraints
We have given the objective function for a single view in Eq. 7. For multiple views, the objective can be formulated as follows:

$$\arg\min \; \sum_{v=1}^{V} \left( \left\|M^{(v)} - Z^{(v)}\right\|_F^2 + \alpha \, \mathrm{Trace}\left(Y^{(v)} L_{G^{(v)}} Y^{(v)T}\right) \right) \qquad (10)$$

Note that we use a separate autoencoder for each view; we minimize the reconstruction losses and feature interaction network regularizers for all views in Eq. 10. Here $X^{(v)}$ can be seen as the learned latent representation for the $N$ samples in view $v$. We can derive a patient similarity network $S^{(v)}$ (which can also be used for clustering patients into groups) from $X^{(v)}$. Multiple approaches can be employed to calculate a patient similarity network. For example, we can use cosine similarity:

$$S_{ij} = \frac{\left\langle X_{i,\cdot}, X_{j,\cdot} \right\rangle}{\left\| X_{i,\cdot} \right\| \left\| X_{j,\cdot} \right\|} \qquad (11)$$

We can thus obtain a patient similarity network $S^{(v)}$ for each view $v$ (Eq. 11 omits the superscript for clarity). Moreover, the outputs of multiple encoders can be "fused" together for supervised learning.
One fusion approach is to sum the encoder outputs, $X = \sum_{v=1}^{V} X^{(v)}$ (Eq. 12), an idea similar to ResNet [29]. Another approach is to concatenate all views together, as in DenseNet [30]. We have tried both in our experiments and the results are not significantly different.
With the fused view $X$, we can again calculate a patient similarity network $S_X$ using Eq. 11. Moreover, since $S_X$ and $S^{(v)}$, $v = 1, 2, \cdots, V$, describe the same set of patients, we can fuse them together using affinity network fusion [14]:

$$S = \mathrm{ANF}\left(S_X, S^{(1)}, \cdots, S^{(V)}\right) \qquad (13)$$

Similar to the feature interaction network regularizer (Eq. 6), we also include a regularization term on the patient view similarity:

$$\mathrm{Trace}\left(X^T L_S X\right) \qquad (14)$$

Here $L_S$ is the graph Laplacian of $S$. Adding this term to Eq. 10, we get the new objective function:

$$\arg\min \; \sum_{v=1}^{V} \left( \left\|M^{(v)} - Z^{(v)}\right\|_F^2 + \alpha \, \mathrm{Trace}\left(Y^{(v)} L_{G^{(v)}} Y^{(v)T}\right) \right) + \beta \, \mathrm{Trace}\left(X^T L_S X\right) \qquad (15)$$

For each type of -omics data, there is one corresponding feature interaction network $G^{(v)}$. Different molecular interaction networks involve distinct feature sets and thus cannot be directly merged. However, patient similarity networks are about the same set of patients and can therefore be combined into a fused patient similarity network $S$ using techniques such as affinity network fusion [14]. Our framework uses both molecular interaction networks and patient similarity networks for regularized learning.
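The per-view patient similarity networks of Eq. 11 and their fusion can be sketched as below; note that the plain averaging used here is only a simple stand-in for affinity network fusion [14], whose actual procedure is more involved.

```python
import torch
import torch.nn.functional as F

def cosine_similarity_network(X):
    """S_ij = <X_i, X_j> / (||X_i|| ||X_j||) over the patient rows of X (Eq. 11)."""
    Xn = F.normalize(X, dim=1)
    return Xn @ Xn.T

views = [torch.randn(50, 100) for _ in range(4)]    # toy X^(v) for 4 views
S_views = [cosine_similarity_network(X) for X in views]
S_fused = cosine_similarity_network(sum(views))     # fused view X = sum_v X^(v)
S = torch.stack(S_views + [S_fused]).mean(dim=0)    # simple stand-in for ANF [14]
L_S = torch.diag(S.sum(dim=1)) - S                  # Laplacian used in Eq. 14
```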
Supervised learning with multi-view factorization autoencoder
With multi-view data and feature interaction networks, our framework with the objective function in Eq. 15 can be used for unsupervised learning. When labeled data are available, we can use our model for supervised learning by adding a supervised loss term:

$$\arg\min \; L\left(T, XC\right) + \eta \sum_{v=1}^{V} \left\|M^{(v)} - Z^{(v)}\right\|_F^2 + \alpha \sum_{v=1}^{V} \mathrm{Trace}\left(Y^{(v)} L_{G^{(v)}} Y^{(v)T}\right) + \beta \, \mathrm{Trace}\left(X^T L_S X\right) \qquad (16)$$

The first term $L(T, XC)$ is the classification loss (e.g., cross-entropy loss) or regression loss (e.g., mean squared error for continuous target variables) for supervised learning. $T$ denotes the true class labels or other target variables available for training the model. As in Eq. 12, $X = \sum_{v=1}^{V} X^{(v)}$ is the sum of the last hidden layers of the $V$ autoencoders; it also represents the learned patient representations combining multiple views. $C$ is the weight matrix of the last fully connected layer typically used in neural network models for classification tasks. The second term, $\sum_{v=1}^{V} \|M^{(v)} - Z^{(v)}\|_F^2$, is the reconstruction loss for all the submodule autoencoders. The third and fourth terms are the graph Laplacian constraints for the molecular interaction networks and the patient similarity network, as in Eq. 6 and Eq. 14. $\eta, \alpha, \beta$ are non-negative hyperparameters adjusting the weights of the reconstruction loss, feature interaction network loss, and patient similarity network loss.
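Putting the pieces together, the supervised objective of Eq. 16 can be assembled roughly as follows; the function and tensors are illustrative placeholders rather than the released implementation referenced below.

```python
import torch
import torch.nn.functional as F

def total_loss(T, logits, recon_losses, feat_penalties, patient_penalty,
               eta=1.0, alpha=1.0, beta=1.0):
    """Eq. 16: classification + reconstruction + both network regularizers."""
    cls = F.cross_entropy(logits, T)          # L(T, XC)
    return (cls
            + eta * sum(recon_losses)         # sum_v ||M^(v) - Z^(v)||_F^2
            + alpha * sum(feat_penalties)     # feature interaction regularizers
            + beta * patient_penalty)         # patient similarity regularizer

# Toy call with placeholder loss values for 4 views
logits, T = torch.randn(8, 2), torch.randint(0, 2, (8,))
loss = total_loss(T, logits, [torch.tensor(1.0)] * 4,
                  [torch.tensor(0.1)] * 4, torch.tensor(0.05))
```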
A simple illustration of the whole framework combining two views with two-hidden-layer autoencoders is depicted in Fig. 1. The whole framework is end-to-end differentiable. We implemented the model using PyTorch (https://github.com/BeautyOfWeb/Multiview-AutoEncoder).
Datasets
We downloaded and processed two datasets from The Cancer Genome Atlas (TCGA): Bladder Urothelial Carcinoma (BLCA) and Brain Lower Grade Glioma (LGG). 338 patients from the BLCA project and 423 patients from the LGG project were selected for downstream analysis, all of which have gene expression, miRNA expression, protein expression, and DNA methylation as well as clinical data available.
Target clinical variable
The main target variable is the Progression-Free Interval (PFI) event. PFI is a derived clinical (binary) outcome endpoint [31], which is relatively accurate and is recommended to use for predictive tasks [31]. PFI=1 implies the treatment outcome is unfavorable. For example, the patient had a new tumor event in a fixed period, such as a progression of disease, local recurrence, distant metastasis, new primary tumors, or died with cancer without a new tumor event. PFI=0 means the patient did not have a new tumor event or was censored in a fixed period. We are trying to predict the Progression-Free Interval (PFI) event using four types of -omics data (i.e., gene expression, miRNA expression, protein expression, and DNA methylation). As this is a binary classification problem, we used Average Precision and AUC (Area Under the ROC Curve) score as the main metrics to evaluate classification performances. The results using other metrics are similar.
Data preprocessing
We performed log transformation and removed outliers for gene features; 4,942 gene features were kept for downstream analysis after filtering out genes with either low mean or low variance. For DNA methylation data, we removed features with low mean and variance; 4,753 methylation features (i.e., beta values associated with CpG islands) were selected for analysis. We also performed log transformation and removed outliers for miRNA features, and removed nine protein expression features with NA values. In total, 10,546 features were selected for downstream analysis. Each type of feature was normalized to have zero mean and a standard deviation of 1.
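The per-feature-type preprocessing described above can be sketched as follows; the filtering thresholds are illustrative, since the exact cutoffs are not reported.

```python
import numpy as np

def preprocess(M, log_transform=True, min_mean=0.1, min_var=0.1):
    """Log transform, mean/variance filtering, then z-score normalization."""
    if log_transform:
        M = np.log1p(M)                              # log transform
    keep = (M.mean(axis=0) > min_mean) & (M.var(axis=0) > min_var)
    M = M[:, keep]                                   # drop low-mean/low-variance features
    return (M - M.mean(axis=0)) / M.std(axis=0)      # zero mean, unit variance

M = np.abs(np.random.randn(338, 2000))               # toy expression matrix
M_proc = preprocess(M)
```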
Molecular interaction networks
We downloaded the protein-protein interaction network from the STRING (v10.5) database [26] (https://string-db.org/), which contains more than ten million protein-protein interactions with confidence scores between 0 and 1000. We filtered out most interaction edges with low confidence scores and selected about 1.5 million interaction edges with confidence scores of at least 400. We extracted a subnetwork from this PPI interaction network for gene and protein expression features. Since the gene-gene interaction network is too sparse, we performed a one-step random walk (i.e., multiplying the interaction network by itself), removed outliers, and normalized it. For miRNA and methylation features, we first map miRNA/methylation features to gene (protein) features and then calculate a miRNA-miRNA and a methylation-methylation interaction network. Take miRNA data as an example. Let $M_{miR\text{-}pro}$ be the adjacency matrix for the miRNA-protein mapping (derived from miRDB (http://www.mirdb.org) miRNA target prediction scores), and let $M_{pro\text{-}pro}$ be the protein-protein interaction network; then the miRNA-miRNA interaction network $M_{miR\text{-}miR}$ is calculated as follows:

$$M_{miR\text{-}miR} = M_{miR\text{-}pro} \cdot M_{pro\text{-}pro} \cdot M_{miR\text{-}pro}^{T}$$

All four feature interaction matrices are normalized to have a Frobenius norm equal to 1.
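The bipartite projection above translates directly into matrix products, as sketched below with illustrative matrix sizes.

```python
import numpy as np

n_mir, n_pro = 300, 2000                      # illustrative sizes
M_mir_pro = np.random.rand(n_mir, n_pro)      # miRNA -> protein target scores (miRDB-style)
M_pro_pro = np.random.rand(n_pro, n_pro)
M_pro_pro = (M_pro_pro + M_pro_pro.T) / 2     # stand-in symmetric PPI weights

M_mir_mir = M_mir_pro @ M_pro_pro @ M_mir_pro.T   # project PPI through the mapping
M_mir_mir /= np.linalg.norm(M_mir_mir)            # Frobenius norm 1, as for all networks
```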
We randomly chose 70% of the dataset as the training set, 10% as the validation set, and the remaining 20% as the test set. We trained different models on the training set and evaluated them on the validation set. We chose the model with the best validation accuracy to make predictions on the test set and reported the Average Precision and AUC score on the test set.
Experimental results
We compare our model with SVM, Decision Tree, Naive Bayes, Random Forest, and AdaBoost, as well as Variational AutoEncoder (VAE) and Adversarial AutoEncoder (AAE). Traditional models such as SVM only accept one feature matrix as input. So we used the concatenated feature matrix as model input. We used a linear kernel for SVM. We used 10 estimators in Random Forest and 50 estimators in AdaBoost.
For the Multi-view AutoEncoder (MAE) model with a classification head, we used a three-layer neural network. The input layer has 10,546 units (features). Both the first and second hidden layers have 100 hidden units. The last layer also has 10,546 units (i.e., the reconstruction of the input). We added a classification head which is a linear layer with two hidden units corresponding to two classes.
To facilitate fair comparisons, all of our proposed Multi-view Factorization AutoEncoder (MAE) models share the same model architecture (i.e., two hidden layers, each with 100 hidden units, for each of the four submodule autoencoders), but the training objectives are different. Since this dataset has four different data types, our model has four autoencoders as submodules, each of which encodes one type of data (one view). Figure 1 shows our model structure (note that in our experiments we have four views instead of the two shown in the figure). We combine the outputs of the four autoencoders (i.e., the outputs of the last hidden layers) by adding them together (Eq. 12) for classification tasks.
The training objective for the Multi-view Factorization AutoEncoder (MAE without graph constraints) includes only the first two terms in Eq. 16. The objective for the Multi-view Factorization AutoEncoder with feature interaction network constraints (MAE + feat_int) includes the first three terms in Eq. 16. The objective for the Multi-view Factorization AutoEncoder with patient view similarity network constraints (MAE + view_sim) includes the first two and the last terms in Eq. 16. Finally, the objective for the Multi-view Factorization AutoEncoder with both feature interaction and view similarity network constraints (MAE + feat_int + view_sim) includes all four terms in Eq. 16.
As our proposed model with network constraints is end-to-end differentiable, we trained it with Adam [32] with a weight decay of $10^{-4}$. The initial learning rate is $5 \times 10^{-4}$ for the first 500 iterations and is then decreased by a factor of 10 (i.e., to $5 \times 10^{-5}$) for another 500 iterations. Models with the best validation accuracies are used for prediction on the test set.
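This schedule corresponds to a standard Adam-plus-StepLR setup in PyTorch, sketched below with a stand-in model and loss.

```python
import torch

model = torch.nn.Linear(10, 2)                       # stand-in for the MAE model
x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=500, gamma=0.1)

for it in range(1000):                               # 500 iters at 5e-4, then 5e-5
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)  # stand-in for Eq. 16
    loss.backward()
    optimizer.step()
    scheduler.step()
```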
The Average Precision and AUC scores for Bladder Urothelial Carcinoma (BLCA) and Brain Lower Grade Glioma (LGG) using these models are shown in Tables 1 and 2. Our proposed models (in bold font) achieved better Average Precision and AUC scores for predicting PFI on both datasets. Note that traditional methods such as Decision Tree do not perform as well as deep learning models, which may be due to the superior representation power of deep learning. The more recent Bayesian deep learning approach, the Variational AutoEncoder (VAE), did not achieve good results, while the Adversarial AutoEncoder (AAE) achieved better results than the other methods except our proposed method. Currently, the datasets contain a lot of noise, and the feature interaction networks derived from public knowledgebases are incomplete and noisy, too. If a larger dataset consisting of hundreds of thousands of patients were available, we would expect our proposed model with more network constraints to generalize even better.
Multi-omics outperforms single -omics
We trained our autoencoder models on each type of -omics data (i.e., gene expression, miRNA expression, protein expression, and DNA methylation), and compared them with those trained using multi-omics data (all the four types combined) using our proposed Multi-view Factorization AutoEncoder model. The results on BLCA and LGG datasets are shown in Tables 3 and 4, respectively. For both datasets, the results using multi-omics data (all four data types combined) significantly outperform those using a single type of -omics data.
Results on the tCGA pan-cancer dataset
We also performed experiments on the TCGA Pan-cancer dataset [1], consisting of 6,179 patients with 21 different cancer types. In addition to predicting the Progression-Free Interval (PFI) event, we also predict the Overall Survival (OS) event. Similar to PFI, OS is another derived clinical outcome endpoint [31]. We used the same data processing procedure and experimental settings as described above for the BLCA and LGG datasets. The average AUC scores (10 runs) for predicting PFI and OS using these models are shown in Table 5. Our proposed models (in bold font) achieved better AUC scores for both predicting PFI and OS than the other traditional machine learning methods.
In order to study whether the model architecture would significantly affect the results, we changed the number of hidden layers from one to three. The number of units in each hidden layer is shown in Table 6 (the input and output layers, both of which have 10,546 units, are omitted). As shown in Table 6, the results are not significantly different. In addition, we tried using DenseNet [30] and ResNet [29] as the backbone of the autoencoders instead of multi-layer perceptrons; the results are also not significantly different and thus not presented here.
Learned feature embeddings preserve interaction network structure
Our proposed model learns patient representations and feature embeddings simultaneously. While patients differ from dataset to dataset, the genomic features (such as gene features) and their interaction networks come from domain knowledge and are thus persistent regardless of which dataset is used. Since we have a regularization term in the loss to ensure the learned feature embeddings are consistent with the feature interaction networks, we would like to know whether the model is able to learn an embedding that is "compatible" with the domain knowledge of interaction networks. We plotted the loss term T from one typical run of training our model with feature interaction network constraints in Fig. 2. This regularization term decreased to nearly zero very quickly, which means the information from the feature interaction networks is fully assimilated into the model, or more specifically, into the weights of the decoders. Many independent runs show very similar loss curves, which means the model is able to robustly learn a feature embedding that preserves the feature interaction network information.
Conclusion
While applying machine learning to multi-omics data is challenging because of the "big p, small n" problem, biological domain knowledge can be incorporated into the machine learning model as inductive biases to alleviate potential overfitting. A number of knowledgebases (e.g., STRING [26], Reactome Pathways [27], etc.) contain information for extracting biological interaction networks, which can be incorporated into various machine learning models. In this paper, we presented the Multi-view Factorization AutoEncoder (MAE) model with network constraints that can effectively integrate domain knowledge such as molecular interaction networks with multi-omics data for accurately predicting clinical outcomes. The MAE model consists of multiple factorization autoencoders as submodules for individual data types (views) and combines the views through their high-level abstract representations for supervised learning. Each factorization autoencoder employs a deep architecture for the encoder and a shallow architecture for the decoder. This increases the overall model representation power and provides a natural way to integrate graph constraints into the model. Our model learns molecular and patient embeddings simultaneously. With effective network regularization techniques, we can learn good feature representations and consistent patient similarity networks and feature interaction networks.
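The deep-encoder/shallow-decoder design can be summarized in a few lines of PyTorch. The sketch below is illustrative and the layer sizes are hypothetical; the key point it demonstrates is that a single linear decoder exposes a weight matrix that can serve as the feature embeddings on which graph constraints act.

```python
import torch.nn as nn

class FactorizationAutoEncoder(nn.Module):
    """Sketch of one per-view submodule: deep encoder, shallow decoder."""

    def __init__(self, n_features, hidden=512, latent=64):
        super().__init__()
        self.encoder = nn.Sequential(          # deep encoder
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, latent), nn.ReLU(),
        )
        self.decoder = nn.Linear(latent, n_features, bias=False)  # shallow decoder

    def forward(self, x):
        z = self.encoder(x)       # patient representation for this view
        x_hat = self.decoder(z)   # reconstruction of the input view
        return z, x_hat

    @property
    def feature_embeddings(self):
        # Rows of the decoder weight act as learned feature embeddings,
        # which is where interaction-network regularization can be applied.
        return self.decoder.weight
```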
The experimental results on the Bladder Urothelial Carcinoma (BLCA) and Brain Lower Grade Glioma (LGG) datasets and on the TCGA pan-cancer dataset demonstrated that our proposed model with feature interaction network and patient similarity network constraints outperforms traditional methods and conventional deep learning models at predicting clinical target variables from multi-omics data. Our method can be applied to other large-scale multi-omics datasets to learn latent representations that are consistent with molecular interaction networks for various molecular entities. Besides multi-omics data, our proposed method can also be applied to any other multi-view data with feature interaction networks.
The ultimate goal of multi-omics data integrative analysis is to disentangle complex factors and identify important factors that contribute to disease etiology. Our model learns distributed representations for various molecular entities and facilitates mining relationships among molecular features and clinical features. Essentially, learning good representations for both molecular and clinical features is fundamentally important to unravel the intricate relationships among them. Our work also provides a proof-of-concept framework for unifying data-driven and knowledge-driven approaches for mining multi-omics data with biological knowledge. We hope it can be applied to large-scale cancer genomics data and contribute to elucidating the etiology and mechanisms of cancer and other complex genetic diseases. | 2019-12-20T15:47:34.647Z | 2019-12-01T00:00:00.000 | {
"year": 2019,
"sha1": "8063b64a6f731b0a718e2e6e453ef1856d591963",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/s12864-019-6285-x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8063b64a6f731b0a718e2e6e453ef1856d591963",
"s2fieldsofstudy": [
"Computer Science",
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
264327451 | pes2o/s2orc | v3-fos-license | Effect of BMI on Sleep Pattern Among Non-Tribal Female College Students of Tripura: Cross-Sectional Study
This study investigates the relationship between sleep patterns, sleep quality, and Body Mass Index (BMI) among non-tribal female college students of Tripura, India. The present research reveals that, despite normal BMI values, students suffer from poor sleep quality. The study also notes the higher obesity rates reported among the female tribal student community, primarily attributed to their dietary habits. This article emphasizes the need for further research in this area, considering the significant impact of sleep quality on non-tribal students' overall well-being and academic performance.
Introduction
One of the most crucial physiological needs is thought to be sleep. It is regarded as the most vital biological function of the human body. Many essential bodily processes, including muscle recovery from lactic acid build-up, tissue repair, cognitive functioning, cell and tissue growth and development, enhancement of cardiac function, and body metabolism, take place when we sleep. Sleep is also thought to be quite advantageous psychologically: it improves learning, memory, and other cognitive processes as well as mood. Additionally, sleep aids in maintaining a healthy weight, lessens stress, decreases the likelihood of significant health problems, and promotes social interaction. Poor sleep is associated with illnesses like obesity, mental illness, and cardiovascular disease. While cardiovascular illness is associated with poor sleep, some evidence suggests that poor sleep may also play a causal role (Hale et al., 2020). Less than seven hours of sleep per night is associated with coronary heart disease and a higher risk of dying from the condition. According to Jackson et al. (2015), sleep duration beyond nine hours is also associated with coronary heart disease, stroke, and cardiovascular events. Short sleep duration is linked to an increased risk of obesity in both children and adults, with several studies finding a risk increase of 45-55%. Obesity has also been linked to other sleep-related issues, such as daytime naps, irregular sleep schedules, and poor sleep efficiency; nevertheless, the impact of sleep duration on obesity has received the most research attention (Wang et al., 2017). Sleep issues are typically seen as symptoms rather than causes of mental illness (St-Onge et al., 2016), but mounting data indicate that they are both a root cause and a symptom of mental disorder. Insomnia significantly predicts major depressive disorder; a meta-analysis of 170,000 individuals revealed that insomnia at the start of a study period indicated a more than twofold increased risk for major depressive disorder. Additionally, several studies have found a link between sleeplessness and depression, post-traumatic stress disorder, and suicide. According to Hale et al.
(2020), sleep disturbances can make psychotic episodes more severe and increase the likelihood of psychosis. The Pittsburgh Sleep Quality Index (PSQI) is a tool for evaluating sleep quality. The PSQI was developed in 1988 by Buysse and his colleagues to provide a clear index that both clinicians and patients can use. It is a standardized measure created to gather consistent information about the subjective nature of people's sleep habits. It rose in prominence as a tool for studying the potential links between sleep and bipolar disorder, depression, and sleep disorders. Researchers who work with persons from adolescence through old age increasingly employ the PSQI. Independent evaluations have endorsed the PSQI since it has amassed a significant body of scientific evidence; the measure shows promise in terms of reliability and validity and has great potential for use in clinical practice (Currie, 2008). It has been translated into 56 other languages so far. The PSQI is often referred to as BPSQI, where 'B' stands for Bengali (Tomfohr et al., 2013). With rising rates in both adults and children, obesity is a leading cause of death globally (WHO, 2015). In 195 countries in 2015, there were 600 million obese adults (12%) and 100 million obese children (Haslem et al., 2005). Women are more likely than men to be obese (WHO, 2015). Obesity was designated as a disease in 2013 by a number of medical groups, including the American Medical Association and the American Heart Association (Yazdi et al., 2015; Afshin et al., 2017). Obesity is a medical condition, occasionally referred to as a disease, in which excessive body fat has accumulated to the point where it may be harmful to one's health (Pollack, 2013). When a person's body mass index (BMI), calculated by dividing their weight by height squared, exceeds 30 kg/m², they are considered obese; between 25 and 30 kg/m² is considered overweight (WHO, 2015). According to Luppino et al. (2010), obesity is a significant contributor to disability and is linked to a number of illnesses and ailments, including osteoarthritis, type 2 diabetes, obstructive sleep apnea, and some types of cancer. Physical exercise, a healthy lifestyle, and nutrition are thought to be the main variables for controlling obesity, although risk factors such as inadequate sleep quantity and quality have received less attention. The goal of the current study is to determine whether obesity and poor sleep quality are related in non-tribal female college students in Tripura. The quantity and quality of sleep that the state's college students get has received very little attention; the current study therefore aims to usher in a new era of understanding of college students' health standards in this regard.
Methods:
This cross-sectional survey was carried out between October 2022 and January 2023 in several colleges throughout West Tripura. The subjects were female college students selected from non-tribal groups. All of the subjects, college students between the ages of 19 and 21, gave their informed consent. The exclusion criteria included having at least one obese parent, taking medicine for a condition for longer than three months, smoking and drinking regularly, having a family history of diabetes mellitus, and/or having genetic health problems.
Height (cm) and weight (kg) were measured with an anthropometric measuring device and a weighing machine, and the Body Mass Index (BMI) was calculated. According to the BMI value, subjects were categorized as underweight, normal-weight, or overweight. All subjects filled in the information questionnaire and the PSQI form. The PSQI is a reliable tool for accurately evaluating a person's sleep patterns, latency, quality, and quantity, among other factors. "Good quality sleep" is indicated by a global score of ≤ 5, whereas a global score of > 5 denotes "poor quality sleep"; the student's sleep quality declines as the PSQI score rises.
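As a quick illustration of the two measures used here, the following Python sketch computes BMI from height and weight, assigns a weight category (including the obesity sub-classes used later in the results), and applies the PSQI global-score cutoff. The cut-offs are the standard WHO and PSQI ones; the function names are ours.

```python
def bmi(weight_kg, height_cm):
    """Body Mass Index in kg/m^2."""
    h = height_cm / 100.0
    return weight_kg / (h * h)

def weight_category(b):
    # Standard WHO cut-offs, with the obesity sub-classes used in this study.
    if b < 18.5: return "underweight"
    if b < 25:   return "normal weight"
    if b < 30:   return "pre-obese"
    if b < 35:   return "obesity class I"
    if b < 40:   return "obesity class II"
    return "obesity class III"

def sleep_quality(psqi_global):
    """PSQI global score <= 5 indicates good sleep quality, > 5 poor."""
    return "good" if psqi_global <= 5 else "poor"

b = bmi(55, 160)
print(round(b, 1), weight_category(b), sleep_quality(7))  # 21.5 normal weight poor
```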
Results:
A total of 269 female students responded. The baseline characteristics of the students are given in Table 1, and Chart 1 reflects the health status of the participants. Of the 269 female college students who participated in the present study, 144 (53.53%) maintained a normal BMI. A total of 62 students (23.05%) were underweight. A total of 54 students (20.07%) were pre-obese, 6 students (2.23%) were in obesity class I, and 1 student each (0.37% in each case) belonged to obesity class II and obesity class III respectively. The total number of female students in the obesity-related categories is therefore 62, i.e., 23.05%.
Discussion
The main goal of the current study was to determine how BMI affects the quality of sleep among non-tribal female college students from Tripura. Few research projects have been carried out in this region of India. Tripura has both tribal and non-tribal inhabitants, and the health of the tribal community receives more focus. The literature review reveals that no thorough analysis has been done of the pattern of health distribution and its impact on the sleeping habits of typical (non-tribal) female college students.
The results show that the BMI values (kg/m²) are within the normal range, but the PSQI values are consistently poor. When the health category according to BMI value was plotted against the PSQI value, it is clear from the charts that, in most cases, the PSQI value is poor, even for non-obese female students. So, even a person with a normal BMI value can suffer from poor sleep. A study was conducted to investigate the effects of BMI on health behaviors among 334 Chinese college students; the results showed a significant difference between genders, and high BMI values were found to be associated with disturbance in sleep (Wong, C.A., et al., 2017). Another cross-sectional study was done by Wang et al. in 2019 with college students to find any probable effect of BMI on sleep quality (Wang et al., 2019); its outcome shows that BMI and sleep quality vary with gender. A study conducted by Meena M et al. (2019) on 230 college students aged 18-24 years concluded that there is an association between BMI values and sleep duration. In all these cases, PSQI values were found to be of poor quality.

According to a report from June 2013, almost 190 million internet users are present in India, most of them in the college- and university-going population, and the social platforms mostly accessed by the youth are Facebook, WhatsApp, Instagram, and Twitter (Sharma et al., 2014). The participants were reported to use the internet mostly at night, i.e., after 11:00 P.M., continuing until late at night, and most of them were engaged in social networking. As they reported, surfing the internet is a part of their leisure activity which is possible only at night before going to bed. Next to social networking comes engagement in games, played either singly or in a group. Educational searches come third in the order of interest. A few students run their own YouTube channels, so content-making for the channel is another reason for poor PSQI values. A study was conducted to analyze the internet use pattern among professional students of Tripura (Ghosh and Bhattacherjee, 2020). The study concluded that most of the participants come under the category of average internet users. On the contrary, 7.4% of participants were found to have an excessive addiction to using the Internet irrespective of their course. The report also added that the level of mental problems like depression and anxiety differs according to involvement with the internet: more involvement, more mental problems. This fact is very important from the educational standpoint of a student. It has been reported that almost 300 million people worldwide suffer from depressive moods (Friedrich, 2017), and that almost 75% of the depressed population suffers from symptoms of insomnia. Daytime sleepiness and lack of concentration are two very common symptoms of an insomniac person. Anxiety disorders are also strongly associated with lack of sleep. If a person does not get adequate sleep for their age and work pattern, the tendency to suffer from mental problems increases. Sufficient sleep, especially REM sleep, helps the brain to process signals better, whereas lack of sleep hinders various brain activities such as signal analysis, proper thinking ability, and positive thinking, and leads to tiredness, emotional outbursts, and ultimately mental and physical health disorders. Good sleep promotes a good mood, acceptable social behavior, and good interaction with other members of one's group, and adequate sleep helps a person concentrate on what he or she is doing.
Conclusion
The study found that BMI values do not affect the sleep pattern of college-going non-tribal female students of Tripura: a person with a normal BMI value can still suffer from sleep-related issues. Sleep quality was found to be consistently poor in all the health categories considered. Poor sleep quality is thus a serious health issue for college students of Tripura, irrespective of sex.
Limitations
The sample under study was restricted in terms of age, educational status, and community status, so the results may reflect the homogeneity of the sample. Future studies should include samples from different age groups and a proper comparison with tribal counterparts.
The study was restricted to the western part of Tripura only; other regions should also be included.
Chart 1: Distribution of obesity among female college students.
Chart 2: Comparison of sleep quality according to the health status of the female college students (underweight, normal, pre-obese, obesity class I, obesity class II, obesity class III; good vs. poor sleep quality). Chart 2 shows the distribution of PSQI values among the different categories of health status; females showed poor PSQI values in all categories except obesity class III.
Table 1. Baseline health parameters of the subjects under study. | 2023-10-20T15:37:26.516Z | 2023-10-11T00:00:00.000 | {
"year": 2023,
"sha1": "1f18f837848b8e1174e9541faf6632b0412bc9fe",
"oa_license": "CCBYSA",
"oa_url": "https://www.ijfmr.com/papers/2023/5/7277.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "32047266155eff872a0765f48d0697a5c534e8ee",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": []
} |
235918648 | pes2o/s2orc | v3-fos-license | The seed quality of Indonesian cowpea local varieties after storage
The optimum performance of a cowpea plant population in the field is determined in part by seed quality. This research aims to evaluate the seed quality of several cowpea local varieties after storage in the form of seeds and pods. A total of 18 cowpea local varieties from East Java, West Nusa Tenggara, South Kalimantan, and West Sulawesi were evaluated for seed vigor and viability after being stored for 30 and 60 days at room temperature, using plant material in the form of seeds and pods. The seed multiplication was conducted in the field at Banyuwangi, and the seed quality test using sand media was carried out at the glasshouse of ILETRI Malang. The storage of cowpea in the form of pods and seeds for up to 30 days did not significantly affect the seed quality. Four local varieties (VU 0007, VU 0093, VU 0125, VU 0155) showed good viability after 30 and 60 days of storage, while VU 0032 and VU 0076 reached optimum viability after being stored for 60 days. The speed of germination index (SGI) not only described the level of vigor but also indicated which type of plant material should be stored. The VU 0007, VU 0093, and VU 0155 are recommended to be stored in the form of seeds, whereas VU 0125 can be stored for up to 60 days in the form of seeds or pods. The morphological characters of sprouts, namely hypocotyl length, stem dry weight, and root dry weight, can be considered benchmark parameters for the seed vigor of cowpea.
Introduction
The cowpea (Vigna unguiculata L. Walp) is thought to have originated in Africa [1] and has now spread to, and is planted in, several regions of Indonesia. One of the problems of cowpea cultivation in Indonesia is the absence of seed producers to supply certified cowpea seeds, so farmers obtain their planting material (seeds) from the market or produce it from their own cultivation. As a result, the use of such planting material may prevent the optimum plant population from being reached in the field, which in turn decreases productivity.
Seed quality can be measured through seed vigor and viability. Seed viability reflects the ability of the embryo to grow, whereas seed vigor relates to the ability of seeds to grow under sub-optimal conditions [2,3]. In Phaseolus vulgaris, the use of seeds with low vigor may result in a 20% decrease in seed yield [4]. Other research showed that soybean seeds with high and intermediate vigor can compete against weeds, reduce the accumulation of weed dry mass, and produce similar seed yields in weeded and unweeded treatments [5], which shows that vigorous seeds can compete against weeds. Seed vigor and viability are determined by genetic and physiological factors, including storage conditions [3,6]; the important storage factors are temperature, moisture, seed characteristics, micro-organisms, geographical location, and storage structure. Long-term storage was reported to reduce the seed viability of cowpea by between 4 and 12%, regardless of the temperature and relative humidity of the storage environment [7]. A study of four cowpea cultivars (BRS Mazagao, UFRR Grão Verde, Pretinho Precoce 1, and BRS Guarib) over 3, 6, and 9 months of storage showed that BRS Mazagao retained better physiological quality up to 9 months of storage, whereas the other cultivars showed a reduction in physiological quality after three months [8]. This suggests that genetic factors also determine the tolerable storage period of a cowpea genotype. In sorghum, normal germination and speed of germination after 12 months of storage were lower than after ten months [9]. The storage materials also affect seed vigor: air-tight glass containers were better than sack containers at maintaining the seed vigor and germinability of cowpea under ambient conditions [10].
The quality of seed growth in the field is usually predicted from seed quality testing conducted in the laboratory. A significant correlation has been obtained between the germination performance test in the laboratory and seedling emergence in the field [11]. A study of seed viability and in vitro shoot regeneration in soybean revealed that shoot induction was positively correlated with seed storage, and that nine months of storage decreased seed germination by up to 50% [12]. A study of physiological and biochemical factors in cowpea seed showed that cultivars with a high germination percentage also had a high sugar content, suggesting an important role of sugars in the seed germination process [13]. The seed size of cowpea was reported not to affect seed germination and vigor, but seedling dry weight was affected by seed size [14]. A study on the effect of seed position within the pod on the seed viability of several local varieties of cowpea showed that seeds in the middle and upper parts had higher viability than those in the lower part of the pod, and that the differences in seed viability were determined more by genetic factors [15].
The information on the different patterns of seed quality after being stored in the form of seeds and pods is important to identify the storage period of cowpea seeds for being used as planting material in the next season. The research aims to evaluate the seed quality of several cowpea local varieties after being stored in the form of seeds and pods. The results of this study will provide information about the seed quality of various local varieties of cowpea, and recommendations on tolerable storage period for each local variety.
Methods
The seed multiplication of the cowpea local varieties was carried out during the dry season (April to July 2018) at Genteng Research Station, Banyuwangi (East Java, Indonesia), which is located at 8°22′44.4″ S and 114°8′45.6″ E, 168 m above sea level, with an Entisol soil type.
Research materials
A total of 18 cowpea local varieties from East Java, West Nusa Tenggara, South Kalimantan, and West Sulawesi were used in this study (table 1).
Seed multiplication
The planting for seed multiplication was conducted in a paddy field after rice, without soil tillage. Each local variety was planted in five single 4.0 m long rows, with 0.75 m between rows and a plant spacing of 40 × 15 cm. Fertilizers of 50 kg/ha Urea, 100 kg/ha SP36, and 75 kg/ha KCl were applied entirely at the time of planting. Pests, diseases, and weeds were optimally managed. After the plants reached maturity, 200 matured pods were randomly detached from each variety and dried under the sun on plastic tarps. For pod storage, 100 pods were placed in two sealed plastic boxes (representing two replicates), with 50 pods in each box. For seed storage, the seeds from the other 100 pods were divided in two and placed in two sealed plastic boxes (representing two replicates). The plastic boxes were stored at room temperature in the laboratory of the Indonesian Legume and Tuber Crops Research Institute (ILETRI, Malang) for 30 and 60 days.
The seed quality testing
The seed quality test consisted of seed viability and the Speed of Germination Index (SGI). The seed viability test used sterilized sand media in the glasshouse of ILETRI and was arranged in a randomized block design with three factors and two replications. The first factor was storage in the form of seeds or pods (storage material, S). The second factor was the storage period at ambient temperature, namely 30 and 60 days (storage time, T). The third factor was the 18 cowpea local varieties (V). The seed multiplication in the field and the seed quality testing using sand media of the 18 cowpea local varieties are presented in Figure 1.
The seed viability test used 25 seeds from each treatment and was replicated two times. The percentages of normal and abnormal germination were counted every day. Observations were made ten days after sowing and consisted of seedling height, root length, hypocotyl length, epicotyl length, root dry weight, stem dry weight, and leaf dry weight. The SGI was defined according to the formula of the Association of Official Seed Analysts [16], SGI = Σ (n_i / d_i), where n_i is the number of normal seedlings counted on day i and d_i is the number of days after sowing.
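For clarity, the SGI computation can be expressed in a few lines of Python; this is a straightforward transcription of the formula above, with made-up counts for illustration.

```python
def speed_of_germination_index(daily_counts):
    """AOSA speed of germination index:
    sum over count days of (new normal seedlings on day d) / d.
    `daily_counts` maps day-after-sowing -> newly germinated normal seedlings.
    """
    return sum(n / day for day, n in daily_counts.items())

# Toy example: counts recorded on days 3-6 after sowing 25 seeds.
counts = {3: 5, 4: 10, 5: 6, 6: 2}
print(round(speed_of_germination_index(counts), 2))  # 5.7 (illustrative)
```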
Results and discussion
Analysis of variance of the seed quality and seed morphological characters of the 18 cowpea local varieties after being stored in the form of seeds and pods for 30 and 60 days showed no significant three-way interaction between storage material × storage time × local variety (Table 2). The two-way interaction between storage time × local variety was significant for seed viability, SGI, hypocotyl length, root dry weight, and stem dry weight. There was no significant interaction between storage material × storage time, nor between storage material × local variety, for any of the observed characters. The effect of the single factor storage material was not significant for any character, whereas the effect of storage time was significant for seed viability, SGI, hypocotyl length, root dry weight, stem dry weight, and leaf dry weight. The effect of local variety was significant for all characters except leaf dry weight. (Table 2, last row: Leaf dry weight (g): ns, *, ns, ns, ns, ns, ns. Notes: ns = not significant; * = significant at the 5% probability level (p < 0.05); ** = significant at the 1% probability level (p < 0.01); S = storage material; T = storage time; V = local variety.)
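The three-factor analysis described here maps directly onto a standard factorial ANOVA. Below is a minimal sketch using statsmodels, with a small made-up dataset standing in for the real measurements (only four of the 18 varieties and fabricated viability values, purely to show the model formula).

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Toy layout mirroring the design: 2 storage materials x 2 storage times x
# 4 (of the 18) local varieties x 2 replicates; viability values are made up.
rng = np.random.default_rng(1)
rows = [
    {"S": s, "T": t, "V": v, "viability": 85 + rng.normal(0, 5)}
    for s, t, v, _ in itertools.product(
        ["seed", "pod"], [30, 60],
        ["VU0007", "VU0093", "VU0125", "VU0155"], range(2))
]
df = pd.DataFrame(rows)

# Full factorial model: main effects plus all two- and three-way interactions.
model = ols("viability ~ C(S) * C(T) * C(V)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```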
The mean seed viability after seed storage for 30 and 60 days was 75.97% and 78.06%, respectively, while the mean seed viability after pod storage for 30 and 60 days was 77.50% and 73.33%, respectively (Table 3). Storage in the form of seeds as well as pods for 30 days did not affect seed viability; however, the local varieties showed different seed viabilities after being stored for 60 days. A study reported that high germination rates depended largely on seed viability and storage duration, and differed significantly according to genotype; furthermore, seeds stored for more than 3 months had reduced moisture content and decreased germination percentages [12]. The loss of seed viability was reported to cause problems in the production and expansion of recalcitrant legumes such as cowpea and soybean, whose seed deterioration could be due to poor seed respiration, heating, and possible microbial infections [6,17,18]. Another study reported that soybean seeds deteriorate rapidly, with deterioration rates varying according to storage conditions and initial seed quality in addition to the genotype factor [19].
In this study, when a minimum viability threshold of 90% is used, four local varieties of cowpea, namely VU 0007, VU 0093, VU 0125, and VU 0155, showed consistently high viability, whether stored in the form of seeds or of pods; these varieties were still able to grow optimally after up to 60 days of storage. Meanwhile, VU 0032 and VU 0076 showed high viability after being stored for 60 days. VU 0022 stored in the form of seeds showed 100% viability after 60 days. By contrast, VU 0112 achieved high viability when stored in the form of pods for 60 days.
The speed of germination index (SGI) reflects how rapidly seeds reactivate for optimum growth when the metabolic process is not inhibited. Seeds with a high SGI value after storage still have high vigor. The four local varieties with high viability have different SGI characteristics. VU 0007 showed slow growth and was better stored in the form of seeds. VU 0093 and VU 0155 had high SGI values after 60 days of storage and tended to be better when stored as seeds. A similar pattern was found in VU 0125, which had a consistently high SGI value after storage both in the form of seeds and of pods. Local varieties with low viability, namely VU 0169 and VU 0173, showed low SGI values after 30 and 60 days of storage. Based on these facts, the SGI mainly reflects seed vigor. The use of seeds with high vigor is important for achieving high yield productivity, as simultaneity and uniformity of early plant growth can be attained [20]. In the arid zone, rapid germination and seed longevity were reported to vary among species [21]. The difference in origin and genetic background of the local varieties from several regions of Indonesia not only affects seed quality and vigor but also affects several morphological characters of the seedling (Figure 2a to Figure 2g), except leaf dry weight. In pinto beans, seed weight determines the germination rate, germination percentage, and seedling dry weight [22]. Genetic differences not only affect the morphological characters that determine seed yield but also affect the variability of seed vigor [22]. A study reported that the reflection of seed vigor in crop performance depended on the genotype and was a function of the seed vigor level [23].
In this study, the storage time of 30 and 60 days affected seed quality and vigor, hypocotyl length, and the dry weights of stems, roots, and leaves. The average stem length after 30 days of storage was similar between storage materials, but after 60 days of storage the stem length from seed storage was greater than that from pod storage. However, the average root length, epicotyl length, and leaf dry weight did not differ significantly with storage time or storage material. Another study on cowpea also found greater sensitivity to the storage period, with a lower germination percentage and lower vigor, which affects the initial establishment of seedlings in the field through less developed roots and lower initial biomass accumulation [24]. A study on another legume, soybean, revealed that shoot growth was supported by and directly linked to seed quality, age, and genotype [12].
The characters of seed quality and vigor, and the morphological seedling characters (hypocotyl length, root dry weight, stem dry weight), were affected by the interaction between storage time and local variety. The cowpea local varieties with high seed quality, namely VU 0007, VU 0093, VU 0125, and VU 0155, were characterized by consistent performance in these characters after storage periods of 30 and 60 days in the form of seeds and pods. These characters could therefore be used as benchmark parameters of cowpea seed quality.
Conclusion
Local varieties of cowpea vary in their shelf life, both in the form of seeds and of pods, and high seed quality determines seed vigor. Three local varieties of cowpea, namely VU 0007, VU 0093, and VU 0155, showed high vigor after storage for 60 days in the form of seeds, while VU 0125 can be stored for up to 60 days in the form of seeds or pods. The storage of cowpea for 30 and 60 days affects the variability of hypocotyl length, stem dry weight, and root dry weight of seedlings, hence these characters may be used as indicators of cowpea seed vigor. | 2021-07-16T20:06:52.036Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "969786c29c79b2d7b0b9086e70f0e6516ceae28c",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/807/4/042010",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "969786c29c79b2d7b0b9086e70f0e6516ceae28c",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Physics"
]
} |
236220271 | pes2o/s2orc | v3-fos-license | European grapevine moth in the Douro region: voltinism and climatic scenarios
The European grapevine moth, Lobesia botrana (Lepidoptera: Tortricidae), is considered the main pest in the vineyards of the Douro Demarcated Region (DDR) due to the economic losses it can cause. Damage is caused by the larvae of this pest feeding on grape clusters, rendering them susceptible to Botrytis cinerea in mid-season and leading to the development of primary and secondary rot at harvest. Understanding this pest's behaviour in the region under future climate scenarios is an increasing challenge. Hence, the present study aims to assess the potential effects of two likely climate change scenarios (Representative Concentration Pathways, RCP4.5 and RCP8.5) on Lobesia botrana phenology, particularly on the beginning and the peak of the three Lobesia botrana flights. Our findings show that these phenological events generally occur earlier at all locations, mostly over the long-term period of 2021–2080, being 7 to 12 days in advance in the RCP4.5 scenario and 15 to 24 days in advance in RCP8.5 when compared to current values (2000–2019), regardless of the flight number. These results suggest that a fourth complete flight is likely in the future, and that Lobesia botrana will become a tetravoltine species in the region. The flight (male catches) and infestation of Lobesia botrana over periods with daily temperatures above its upper limit of development (> 33 °C) were also analysed during the period 2000–2019 at the targeted sites. The upward trend in the number of days with maximum temperature above 33 °C tended to be accompanied by a decrease in the total number of male catches during the second and third flights, as well as a decrease in the percentage of bunches attacked by the second and third generations. Overall, climate change is expected to influence the phenology of this pest in the DDR.
INTRODUCTION
The European grapevine moth, Lobesia botrana (Denis and Schiffermüller, 1775) (Lepidoptera: Tortricidae) — henceforth referred to as LB — is one of the most noxious vineyard pests in Europe and the Mediterranean basin (Delbac et al., 2010; Ioriatti et al., 2011; Caffarra et al., 2012). In South America, it was spotted for the first time in Chile in April 2008 (Gilligan et al., 2011). After that, sightings were also reported in California in 2009 and Argentina in 2010 (Cooper et al., 2014). LB larvae damage grapes by feeding on flowers and berries; however, the greatest economic losses are due to secondary infection by Botrytis cinerea at the feeding sites of LB (Gilligan et al., 2011). LB has a facultative diapause and a variable number of generations per year, which depends on two main driving factors: temperature and photoperiod. In general, this moth is trivoltine at Mediterranean latitudes. Nonetheless, a fourth partial flight has been observed during the warmest years, namely under Iberian Peninsula conditions (Martín-Vertedor et al., 2010; Carlos et al., 2018). Voltinism is determined by a conjunction of factors, including latitudinal and altitudinal gradients, which largely reflect the thermal forcing conditions of each site (Martín-Vertedor et al., 2010).
In order for a pest control method (e.g., mating disruption, biocontrol agents or chemical treatments) to be efficient, it would need to be applied to pest populations during their most susceptible stages (Amo-Salas et al., 2011). Pheromone traps can be used to monitor the activity of male moths in vineyards, but they require periodic visits to the field for visual observation and to record the number of catches (Ünlü et al., 2019). All this field information, combined with temperature accumulation methods, is critical for implementing accurate phenological models (Riedl et al., 1976;Amo-Salas et al., 2011). Several models have already been developed to predict LB phenology. Some of them are based on the strong relationship between temperature accumulation (degree day (DD)) and pheromone trap catches of adult males (Milonas et al., 2001;Ortega-Lopez et al., 2014;Carlos et al., 2018). Other authors have incorporated additional atmospheric variables (abiotic factors) in their models, such as precipitation, relative humidity or wind speed, as well as biotic factors, namely fecundity and mortality among others (Gutierrez et al., 2012;Gilioli et al., 2016;Castex et al., 2020). From a practical point of view, such as when planning to implement decision support systems for viticulturists, DD models have several noteworthy advantages, but an important one is their relatively low complexity which facilitate their application once duly validated in terms of local conditions (Carlos et al., 2018). This advantage is particularly important in regions with scarce or irregular observational data, particularly regarding the development stages of LB.
According to a recent report from the Intergovernmental Panel on Climate Change (IPCC, 2018), temperatures are likely to increase by approximately 1.5 °C between 2030 and 2052. Within this climatic context, and as already described by Thiéry et al. (2018), LB could indeed benefit from global warming, since the environmental temperature will be closer to its thermal optimum in many wine regions worldwide. Global warming can also indirectly affect its performance by influencing two associated trophic levels: grapevines and natural enemies, such as parasitoids (Reineke and Thiéry, 2016). Furthermore, air temperature is a key environmental factor that triggers the end of the diapause. Milder early springs are expected to promote a significant advancement of the first emergence of adults from hibernating pupae, thus impacting the voltinism of LB (Martín-Vertedor et al., 2010; Reineke and Thiéry, 2016). Lastly, a combination of phenological models of grapevine (Vitis vinifera L.) and LB has demonstrated that an increase in temperature can result in increased asynchrony between the susceptible grapevine growth stages and the first-generation LB larvae (Reineke and Thiéry, 2016). Iltis et al. (2020) have also demonstrated that global warming could adversely impact the reproductive success of LB and the local abundance of this pest.
The Douro Demarcated Region (DDR) is located in northeastern Portugal. Since 2001, the best-preserved part of this region, the Alto Douro Vinhateiro (ADV), has been classified as a UNESCO World Heritage site due to its cultural, evolutionary and living landscape (Andresen et al., 2004). It is a world-renowned wine region of high natural value and exceptional biodiversity; hence, it is of paramount importance to preserve and enhance this heritage. In the DDR, LB usually produces three generations in one year, which can cause damage to grapevine inflorescences and bunches (Carlos et al., 2007). However, in warmer years, a fourth flight has been detected in early September (Carlos et al., 2018), resulting in a fourth generation of damaging larvae during harvest. In terms of the impacts of climate change in the DDR, strong warming and drying trends are projected for the upcoming decades, implying overall shifts in viticultural suitability, as well as earlier grapevine phenological events (Costa et al., 2019; Reis et al., 2020).
The aim of the present study was to assess climate change impacts on the flight phenology of LB in the DDR, thus providing valuable information to the winegrower for optimising pest control measures within an integrated pest management approach. The study was structured as follows: (1) the use of phenological models to predict LB flights under future climate change scenarios (RCP4.5 and RCP8.5) for the selected study sites and to assess any changes at the beginning of and at peak dates of the three flights, and (2) the analysis of the potential effects of temperatures above an upper limit for development (> 33 °C) on the total number of male catches (second and third flights), as well as on the percentage of bunches attacked (second and third generations), based on the analysis of historical data (2000-2019).
Study area
The climate conditions in the DDR are typical of Mediterranean climates, with characteristically warm dry summers, followed by mild and rainy autumns and winters. Precipitation decreases from the west to the east of the region, with annual precipitation varying from > 1000 mm in the westernmost areas to < 400 mm in the innermost part close to the Spanish border (Fraga et al., 2017). Additionally, DDR is characterised by a high interannual variability in temperature, potential evapotranspiration, total radiation and precipitation (Jones, 2013), and a large diversity of terroirs (Fraga et al., 2018).
The study was carried out on four plots (A, B, C and D) located in two sub-regions of the DDR: Baixo Corgo (BC) and Cima Corgo (CC). Meteorological data from three weather stations (WS: A, B and C/D) near the four plots (< 10 km) for the period 2000-2019 were used. Figure 1 shows the geographical location of the DDR and its sub-regions, along with the locations of the aforementioned plots and WS. The geographical coordinates of all plots and WS are listed in Table 1.
Pest infestation and flight
The percentage of attacked bunches (infestation) and the number of LB males caught in fixed pheromone traps were recorded during the period 2000-2019. The infestation was assessed by randomly inspecting samples of 50 to 100 inflorescences or grape bunches, depending on the season, during each generation of the insect. For reasons explained later, only the infestation of the second and third generations from Plot B was considered for further analysis.
LB flight models
Degree-day (DD) models to predict main LB flights were developed using data of male catches in sex pheromone traps and temperature recorded over a 20-year period, as described by Carlos et al. (2018). The main LB flights in the DDR were described in terms of DD required for the occurrence of its main events (beginning and peak, i.e. occurrence of 50 % of male catches).
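To illustrate how such a DD model operates in practice, the following Python sketch accumulates degree-days from 1 January with the simple averaging method and returns the day of year on which a flight-event threshold is reached. The base and upper thresholds and the DD threshold are illustrative placeholders, not the parameters fitted by Carlos et al. (2018).

```python
def degree_days(tmin, tmax, t_base=7.0, t_upper=33.0):
    """Daily degree-day contribution using the simple averaging method.
    t_base and t_upper are illustrative development thresholds."""
    t_mean = (tmin + tmax) / 2.0
    return max(0.0, min(t_mean, t_upper) - t_base)

def day_of_event(tmins, tmaxs, dd_threshold):
    """First day of year (1 Jan = day 1) on which the degree-days
    accumulated from 1 January reach dd_threshold; None if never reached."""
    acc = 0.0
    for day, (tn, tx) in enumerate(zip(tmins, tmaxs), start=1):
        acc += degree_days(tn, tx)
        if acc >= dd_threshold:
            return day
    return None

# Toy usage: a constant 6-22 degC year reaches a 500-DD event around day 72.
print(day_of_event([6.0] * 365, [22.0] * 365, dd_threshold=500.0))
```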
Climate dataset for future scenarios
The Coordinated Regional Downscaling Experiment (CORDEX) aimed to provide an internationally coordinated framework for improving regional climate scenarios (Jacob et al., 2014); its EURO-CORDEX branch provides regional climate simulations for the European domain (Spinoni et al., 2020). The climate model simulations selected for the present study are listed in Table 2. Gridded TX (maximum temperature) and TN (minimum temperature) for a historical period and a future period (2021-2080) under RCP4.5 and RCP8.5 were used to carry out analyses for the short-term (2021-2040), medium-term (2041-2060) and long-term (2061-2080) periods. The simulated data were available at an 11 km grid resolution and were trimmed to a sector covering the DDR.
As climate simulations may be affected by significant biases with respect to real-world climate, the E-OBS observational dataset was used to calibrate the model data within the DDR sector.
The E-OBS dataset (available at https://www.ecad.eu/download/ensembles/download.php) is a European high-resolution daily gridded observational dataset for average, minimum and maximum temperatures and total precipitation (Cornes et al., 2018). The data were developed as part of the ENSEMBLES project (European Union Framework 6) to be used in climate change studies and for the validation of regional climate models (Haylock et al., 2008; Hofstra et al., 2009; Kyselý and Plavcová, 2010). To correct model biases, a quantile mapping approach (Amengual et al., 2011; Miao et al., 2016) was applied to the simulated data using E-OBS as a baseline and the common historical period of 1981-2015 as a reference. In this context, the bias-corrected gridded temperatures (TN and TX) were extracted from the grid points closest to the selected plots. The extracted grid points were E-OBS A and B (41º11'15"N and 7º41'15"W) and E-OBS C/D (41º11'15"N and 7º48'45"W).
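The essence of empirical quantile mapping can be sketched in a few lines of Python/NumPy. This is a generic illustration of the technique, not the specific implementation used here: the model's historical quantiles are mapped onto the observed ones, and future values are corrected through that mapping.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future, n_q=100):
    """Empirical quantile mapping: correct model_future so that the
    model's historical quantiles match the observed ones."""
    q = np.linspace(0.5 / n_q, 1 - 0.5 / n_q, n_q)
    mq = np.quantile(model_hist, q)   # model historical quantiles
    oq = np.quantile(obs_hist, q)     # observed historical quantiles
    # Map each future value through the model CDF to the observed quantiles.
    return np.interp(model_future, mq, oq)

# Toy check: a model running 2 degrees too cold gets shifted back up.
rng = np.random.default_rng(0)
obs = rng.normal(15, 6, 5000)
mod = obs - 2.0
print(quantile_map(mod, obs, np.array([10.0, 20.0])))  # approx [12., 22.]
```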
Subsequently, linear regression adjustments between WS and E-OBS data were carried out (two linear regressions for each WS, T N and T X ; i.e., six transfer functions in total). The corresponding linear regression equation (transfer function) was then applied to climate model data to adjust the bias-corrected simulations of local climatic conditions. It was assumed that the transfer functions already identified in the historical period between E-OBS and WS would remain mostly unchanged for the future period. This late adjustment was a critical point in the analysis, as the results of the phenology models are very sensitive to the actual temperature observed on a specific site.
Trend analysis and statistical significance
Although linear regression equations are frequently fitted to time series for the initial detection and estimation of trends (a reasonable approach when the coefficient of determination (R-squared) associated with the linear trend is relatively high), climate change trends are frequently non-linear throughout the study period. Hence, the statistical significance of trends in a time series should also be assessed using non-parametric hypothesis tests. The Mann-Kendall (MK) test is a non-parametric test used to statistically assess whether there is either an upward or a downward trend in the parameters of interest (Mann, 1945; Kendall, 1975). The significance of the MK test can be verified with a bilateral test by applying the standardized Z statistic described by Yenigun et al. (2008):

Z = (S − 1)/√Var(S) if S > 0; Z = 0 if S = 0; Z = (S + 1)/√Var(S) if S < 0, (1)

where S is the MK statistic, and which is used to test the null hypothesis (H0).
A positive value of Z (Z > 0) indicates an upward trend, while a negative value (Z < 0) indicates a downward trend. To test the increasing or decreasing trend at the significance level α, H0 is rejected if the absolute value of Z is greater than Z_α/2 (|Z| > Z_α/2) (Moreira and Naghettini, 2016). In this study, the MK test was used at a significance level of 0.1 %, p ≤ 0.001 (i.e., a confidence level of 99.9 %), for testing monotonic trends in the beginning and peak dates (in DOY) of the three LB flights under RCP4.5 and RCP8.5.
The MK test can detect statistically significant trends, but it does not provide their magnitude. Therefore, the MK test was complemented by Sen's slope estimator, initially proposed by Sen (1968) and described according to Hirsch et al. (1982) by

D_ij = (x_j − x_i) / (t_j − t_i), (2)

where t_i and t_j represent the years within the period 2021-2080 at the i-th and j-th time instants respectively, and x_i and x_j are the corresponding values, for both scenarios (RCP4.5 and RCP8.5). Lastly, the magnitude of the trend is estimated by the median of the values of D_ij.
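For reference, a compact Python sketch of both statistics is given below. It implements the standard formulas (without the tie correction to the variance, for brevity) and is not the authors' code; the toy series stands in for a sequence of flight-peak days of year.

```python
import numpy as np

def mann_kendall_z(x):
    """Standardized Mann-Kendall statistic (no tie correction, for brevity)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        return (s - 1) / np.sqrt(var_s)
    if s < 0:
        return (s + 1) / np.sqrt(var_s)
    return 0.0

def sens_slope(x):
    """Sen's slope: median of pairwise slopes (x_j - x_i) / (j - i)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return np.median(slopes)

doy = [150, 149, 147, 148, 145, 144, 143, 141]  # toy flight-peak DOY series
print(mann_kendall_z(doy), sens_slope(doy))     # negative Z, negative slope
```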
Effects of warmer conditions on pest infestation and flight
The potential effects of high temperatures on the number of LB male catches and on infestation were also examined. A preliminary analysis was carried out on Plot B, as it provided the largest sample of observed data for the percentage of attacked bunches and the total number of male catches. These data were related to the number of days with maximum temperature, TX, above 33 °C during the second generation/flight (2000-2019: fifteen years with % of attacked bunches/thirteen years with the total number of catches) and the third generation/flight (2000-2019: eleven years with % of attacked bunches/thirteen years with the total number of catches). The number of days with maximum temperature above 33 °C was counted over the months in which the second (June-July) and third (July-September) flights/generations took place. Lastly, the infestation and male LB catches at high temperatures (> 33 °C, i.e., above the upper limit of development) were also analysed to understand the impact of excessively high temperatures on an LB population (Briere and Pracros, 1998).
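Counting these hot days per flight window is a one-liner with pandas; the sketch below shows the idea on a tiny made-up daily maximum-temperature series.

```python
import pandas as pd

# Made-up daily maximum-temperature series indexed by date (illustrative).
wx = pd.Series(
    [30.5, 34.2, 35.0, 31.0, 36.1, 29.8],
    index=pd.to_datetime(["2019-06-15", "2019-06-16", "2019-07-01",
                          "2019-07-02", "2019-08-10", "2019-09-01"]),
)

def hot_days(tx, months, threshold=33.0):
    """Number of days with TX above `threshold` in the given months."""
    sel = tx[tx.index.month.isin(months)]
    return int((sel > threshold).sum())

print(hot_days(wx, months=[6, 7]))     # second flight/generation window
print(hot_days(wx, months=[7, 8, 9]))  # third flight/generation window
```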
To predict the impact of future climate scenarios on LB flights, the three non-linear models described by Carlos et al. (2018) for predicting each LB flight were used, with 1 January as the starting point for DD accumulation. The models were then run automatically in computational code with the corrected TN and TX for 2021-2080 (corresponding WS and plots) under both RCP4.5 and RCP8.5. The output comprised the simulated days (in Julian days of the year (DOY)) of the beginning and peak (corresponding to 50 % of male catches) of each LB flight.
Correction of model climate data with observations
In order to assess the consistency between E-OBS and local WS data, the TN and TX recorded at the different WS (A, B and C/D) and obtained from the E-OBS dataset (A, B and C/D) were correlated. The Pearson product-moment correlation coefficients and the corresponding determination coefficients were estimated. This correlation analysis was complemented by fitting linear regression lines to the corresponding scatterplot diagrams. For this purpose, E-OBS A and B data were correlated with observed data from WS A and B (at the same locations as plots A and B respectively), whereas E-OBS C/D data were correlated with WS C/D (the same location as plots C and D). Hence, a total of six linear regressions (three for TN and three for TX) were carried out and their respective parameters were estimated following a least-squares approach.
Overall, statistically significant correlation coefficients at the 99 % confidence level were obtained. Moreover, the correlation coefficients were higher than 0.94 for all regressions, confirming very robust correlations between the two datasets. Accordingly, very high determination coefficients (percentage of variance explained by the linear regression model) were obtained, ranging from 89 % to 98 % (Figure 2 A, B, C, D, E and F).
Although the data pairs tend to be well aligned in clouds close to the corresponding regression lines, some discrepancies are worth mentioning. This is particularly true for TX at WS B, where several days with maximum temperatures close to 0 ºC were observed at the WS while much higher values were obtained from E-OBS; for TN there were also a few days with similar discrepancies. These discrepancies weaken the linear association between the WS and E-OBS time series. After a more thorough analysis of these specific days, it was found that almost all of them occurred during the winter period (December to February), when strong thermal inversions occurred in the region; these thermal inversions were associated with settled weather conditions driven by strong and nearly stationary anticyclonic systems located over the Iberian Peninsula (not shown). Such conditions contribute to the formation of thick fog layers in the deep valleys of the region, which can sometimes persist for several days when solar radiation levels are not high enough to dissipate the fog during the day; the fog is subsequently reinforced at night by mountain breezes and katabatic winds that drain cool air masses to the low-elevation areas of the Douro River basin. The other WS were less prone to these events.

In addition, the development and the end of the diapause (Tobin et al., 2001; Tobin et al., 2002) of the majority of multivoltine insects described by Tobin et al. (2008) are driven by temperature, while the beginning of the diapause is driven by photoperiod (Nagarkatti et al., 2001). Under regional conditions, the projected increase in temperature could bring forward the end of the diapause of individuals emerging in spring, leading to an earlier first flight of LB.
Owing to the aforementioned noteworthy consistency between E-OBS and WS data, these linear regressions were then used to calibrate the climate model data to site conditions over the future period and for both scenarios (RCP4.5 and RCP8.5). This is a common procedure for correcting bias in the mean (location) and variance (scale), which guarantees that local weather conditions will be more accurately represented in both E-OBS and climate model data. Other more advanced methodologies, like quantile mapping approaches, are not recommendable when only short temporal periods are available (small sample sizes), as is the case of the WS time series being analysed here. The intercepts of the linear regression equations show some important biases in the mean, varying from nearly 0.8 to 2.5 ºC (all of them positive), which implies that the WS temperatures are systematically higher than in E-OBS and significant corrections in the means were applied to E-OBS data. The slopes of the linear regression equations vary from approximately 0.97 to 1.04, which means that the variance is similar in both datasets, and only slight corrections were applied to the E-OBS variances.
LB flight
The bias-corrected and site-adjusted daily mean temperatures for the future climatic scenarios (RCP4.5 and RCP8.5) were used as input for the three previously defined models, one for each flight (Figure 3). The simulated days of the beginning and peak dates of the LB male flights were then obtained for each GCM-RCM model chain (CNRM-ALADIN, ICHEC-DMI, IPSL-INERIS and MPI-CLM).
Simulations for future scenarios
The simulated days for the beginning and peak dates of the LB flights were obtained for each combination of location (four), flight (three), climate scenario (two) and climate model (four) separately. Figures 4 and S1 (Supplementary Material) show the box-plot diagrams for RCP4.5 and RCP8.5 respectively, in which all the models and years within a given period are pooled.
Although there is a significant spread within each sub-period (short, medium and long-term) and flight (roughly 35-60 days), there is a clear and consistent trend in earlier beginning and peak dates of the different LB flights and for all plots.
For the sake of succinctness, the equally weighted ensemble averages for the four climate models were calculated, keeping the other parameters separate (scenario, location and flight number).
The ensemble averages were thoroughly analysed in this study, as they are central tendency measures which can be used as an indicator for climate model experiments, which are all considered equally valid. Figures S2 and S3 show the chronograms of the interannual variability of the ensemble averages of the LB beginning and peak flight dates (DOY) during the period 2021-2080 for each location and flight curve and in the RCP4.5 and RCP8.5 scenarios respectively. For all flights and locations (Figures S2 and S3), there is a general shift to earlier beginning and peak flight dates (downward trends) when compared to present conditions (2000–2019). Not surprisingly, the trends are generally more significant in RCP8.5 than in RCP4.5, as the former scenario is related to a much stronger radiative anthropogenic forcing, with more accentuated upward trends in temperature. To complement the information in the chronograms, the values of the linear regression trends of the ensemble averages are shown in Table 3.
For all flights and locations (Figures S2 and
For the first LB flight and all locations (Figures S2 and S3, Table 3), the flight beginning and peak dates in both scenarios (RCP4.5 and RCP8.5) occurred somewhat earlier. Over one decade, the advance in both beginning and peak dates varies from 1 to 2 days with respect to present values in RCP4.5 and up to 3 days in the most severe scenario (RCP8.5). For the whole period (60 years), the trend towards an earlier beginning of flight is evident for all locations in the RCP4.5 scenario: 8 days earlier in plot B and 7 days earlier in plots A and C/D. The flight peak in the same scenario and for all locations was projected to be 10 days in advance. At the end of the period, for the third LB flight (Figures S2 and S3; Table 3), the beginning of the flights was 11 days earlier for plots A and C/D and 12 days earlier for plot B. The peak flight date was 12 days in advance in all locations. A clear trend can be observed for the most severe climate scenario: the beginning of flight in plot B was 23 days earlier and the flight peak was 24 days earlier.
In general, at the end of the period, the beginning and peak of flights were 11-12 days in advance in RCP4.5 and 22-24 days in the most severe climate scenario.
Therefore, these results show that in both climatic scenarios pest phenological events are projected to occur earlier and voltinism to strengthen in the future.
To better document the inter-model variability, the minimum and maximum DOY for each variable have also been depicted ( Figures S2 and S3). Although the spread among models is apparent, there is no evidence for a change in the model uncertainty throughout the future period.
An assessment was carried out on the statistical significance of the identified trends in LB flight beginning and peak dates for the full period 2021-2080 in all locations and climatic scenarios (RCP4.5 and RCP8.5). The application of the MK trend test to plot A (Table 4) shows p-values below the 0.1 % (p ≤ 0.001) significance level, regardless of flight and climatic scenario; H0 can thus be rejected, verifying a significant trend in the analysed series. The Sen's slope estimator generally varied between -1.11e-01 and -3.75e-01. Similar results were found for the remaining plots (not shown). These values therefore confirm the statistical significance of the identified downward trends in both the beginning and peak dates of the LB flights.
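For readers wishing to reproduce this step, the following sketch implements the standard Mann-Kendall test (without tie correction) and the Sen's slope estimator from their textbook formulas; it is an independent illustration, not the authors' implementation.

```python
import numpy as np
from scipy import stats

def mann_kendall(x):
    """Mann-Kendall trend test and Sen's slope (no tie correction; illustrative)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S statistic: sum of signs of all pairwise differences
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - stats.norm.cdf(abs(z)))           # two-sided p-value
    slopes = [(x[j] - x[i]) / (j - i)
              for i in range(n - 1) for j in range(i + 1, n)]
    return p, np.median(slopes)                    # p-value, Sen's slope
```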
Effects of extreme temperatures versus pest infestation and male catches
The likely implications of very warm weather conditions on the LB population are only briefly evaluated here, as the sample size of observed data is very limited. An illustrative analysis was carried out on plot B for percentage of attacked bunches and the total number of adult male catches based on the evolution of these parameters over the period 2000-2019. The results are shown in Figure 5.
For the second generation, an increase in the number of days above 33 °C (June-July) and a decrease of 1.7 % per year in attacked bunches were observed. Moreover, fewer than 17 male catches were recorded in the delta traps during the second flight. For the third generation, the same trend as for the second was observed: the number of days above 33 °C (July-September) also increased, attacked bunches decreased by 0.8 % per year, and 10 catches were recorded during the third flight.
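A simple way to derive such counts, assuming a daily DataFrame with a DatetimeIndex and a 'tmax' column (illustrative names):

```python
import pandas as pd

# daily: DataFrame with a DatetimeIndex and a 'tmax' column (assumed)
summer = daily[daily.index.month.isin([6, 7])]            # June-July window
hot_days = (summer["tmax"] > 33).groupby(summer.index.year).sum()
print(hot_days)  # number of days above 33 degC per year
```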
DISCUSSION
The results obtained in this study show a considerable warming trend in the studied locations of the DDR in the upcoming decades. The temperature increase projected for the climate scenarios (RCP4.5 and RCP8.5) is in line with those already reported (IPCC, 2018); i.e., an increase of 1.5 °C in the period 2030-2052. In the analysis of the phenological activity of LB, models of three flights previously validated with data exclusively from DDR were applied to the regional future projections.
Our results indicate generally earlier LB flight dates throughout the period 2021-2080 under the two climatic scenarios (RCP4.5 and RCP8.5) compared to the present (2000-2019).
Overall, the LB flights in all locations at the end of the period were 7 to 12 days in advance in RCP4.5, and 15 to 24 days in advance in RCP8.5, regardless of the flight. The earliness of the LB flights in these climatic scenarios is in line with other similar studies (Martín-Vertedor et al., 2010; Caffarra et al., 2012; Reineke and Thiéry, 2016; Taylor et al., 2018). Changes in phenology due to global warming have also been demonstrated for pests other than this lepidopteran (Gutierrez et al., 2008; Andrew and Hill, 2017). The significant earliness of the three LB flights in both climatic scenarios (RCP4.5 and RCP8.5) over the period 2021-2080 suggests that a complete fourth flight could occur in the future in the DDR, as occurred in Spain in 2006 (Martín-Vertedor et al., 2010). Forister and Shapiro (2003) analysed adults of 23 species of Lepidoptera in the Central Valley of California and found that the first flight date had advanced over the previous 31 years. Furthermore, Stefanescu et al. (2003) analysed data from a trap over the period 1988-2002 in Spain and reported that the average date of the first flight of 8 species was significantly advanced. In a study conducted in southwest Australia, the flight of Heteronympha merope was found to occur on average 1.5 days earlier per decade over 65 years, owing to an average temperature increase of 0.16 °C per decade over this period (Kearney et al., 2010). The effects of climate change on these species (Lepidoptera) have thus had a visible impact on their phenology (Hufnagel and Kocsis, 2011), which is confirmed by the results observed in this study.
Until the end of this century, an increase in voltinism will become more likely owing to warming, since faster development at higher temperatures could lead to additional generations of multivoltine species, as is the case for the grape moth, and as supported by previous studies (Altermatt, 2010; Reineke and Thiéry, 2016). In addition, the increase in temperature may advance the end of the diapause of individuals emerging in spring, with an earlier first flight. On the other hand, over the past few years, due to climate change, the phenological events of grapevine have been occurring earlier, specifically budburst, flowering, veraison and ripening. According to Costa et al. (2019), in the DDR, the budburst, flowering and veraison of cv. Touriga Nacional and Touriga Franca are projected to occur 6-8 and 10-12 days earlier, respectively, by the end of the century. Jones (2007) found a strong relationship between phenological timings and observed warming, with phenological events occurring 6 to 25 days earlier in various grapevine varieties and locations.
Other studies have revealed similar findings (Ramos, 2017;Reis et al., 2020). Both LB and Vitis vinifera L. can therefore be expected to undergo significant shifts in their phenology with increasing temperatures, though their interaction is not yet fully understood and requires further research in forthcoming studies.
The results obtained in the present study indicate that there is a relationship between the number of days with maximum temperature above 33 °C and the population dynamics observed in the second and third generations/flights, in particular in the percentage of attacked bunches and the total number of male captures. According to Woiwod (1997), besides influencing phenology, climate change can also lead to changes in species abundance. Gutierrez et al. (2012) found that LB abundance levels decreased in hot deserts in southern California, where temperatures frequently surpass the species' upper thermal limit. Another study by Gutierrez et al. (2018) reported that LB levels decreased in dry and warm areas, such as southwestern Spain or Morocco, where high summer temperatures near or exceeding the upper limit adversely affected their vital rates. Iltis (2019) suggests that heatwaves can have important implications for the defensive abilities of LB against its natural enemies, predisposing natural populations to attack by larval parasitoids. Moreover, a recent study by the same author (Iltis et al., 2020) suggests that climate change could reduce the abundance of this pest over generations in Eastern France, owing to the negative effects of the local warming scenario on adult LB lifespan and reproductive success. Further research should be carried out in the DDR to better assess the effects of excessively high temperatures on LB population dynamics, as only a preliminary assessment was carried out in the present study.
The use of phenological models to predict LB development in the DDR should be improved in future research by, for example, collecting more field data (from eggs and larval stages) to ensure the fine-tuning of model parameters and thresholds to the actual conditions, or by incorporating new abiotic and biotic factors in the model. The use of atmospheric elements, like precipitation, wind speed or relative humidity, could improve model performance and its corresponding prediction capacity, which will be of foremost relevance to viticulturists in the DDR.
CONCLUSIONS
In this study, a new methodology was implemented to evaluate the impact of future climate scenarios on LB evolution in the DDR. Future warming implies a generalised shift to earlier beginning and peak dates of the three LB flights in the studied locations. A fourth complete flight is therefore increasingly likely in the future, with LB becoming a tetravoltine species and voltinism increasing. Conversely, the number of days with excessively high temperatures (i.e., above the upper threshold for development, 33 °C) is projected to increase in the future. This is expected to result in a decrease in the total number of male catches in the traps (second and third flights) and in the percentage of bunches attacked (second and third generations), which is already being recorded in association with above-optimal temperatures. Excessively high temperatures could therefore have implications for LB populations and their defensive ability against natural enemies. This could be valuable information for winegrowers in the future to optimise control measures for LB within an integrated pest management approach.
Acknowledgements: This research was funded by the operation nº NORTE-06-3559-FSE-000067 and by the Clim4Vitis project "Climate change impact mitigation for European viticulture: knowledge transfer for an integrated approach", which is funded by European Union's Horizon 2020 Research and Innovation Programme, under grant agreement nº810176. The authors from CITAB were also supported by National Funds by FCT -Portuguese Foundation for Science and Technology, under the project UIDB/04033/2020. Finally, the authors would also like to thank ADVID and its members for providing meteorological data from weather stations and for collaborating in the field data collection, specifically damage assessment and counting of traps. | 2021-07-26T00:06:15.983Z | 2021-06-07T00:00:00.000 | {
"year": 2021,
"sha1": "1513dc3eb026ceaf9fd04c6b00382f85c16fb393",
"oa_license": "CCBY",
"oa_url": "https://oeno-one.eu/article/download/4595/15772",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c4d31388684bc04508845707718fe640c02f06c8",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
237323764 | pes2o/s2orc | v3-fos-license | Effect of the COVID-19 Lockdown on Air Pollution in the Ostrava Region
A proper estimation of the effect of anti-epidemic measures related to the COVID-19 outbreak on air quality has to filter out the influence of weather on pollution concentrations. The goal of this study was to estimate the effect of anti-epidemic measures at three pollution monitoring stations in the Ostrava region. Meteorological data were clustered into groups with similar weather patterns, pollution data were divided into subsets according to these weather patterns, and each subset was then evaluated separately. Our estimates showed a 4.1–5.7% decrease in NOx concentrations attributed to lower traffic intensity during the lockdown. The decrease of PM2.5 varied more noticeably between monitoring stations. The highest decrease (4.7%) was detected at the traffic monitoring station, while no decrease was detected at the rural monitoring station, which mainly captures domestic heating pollution. The key result of the study is the development of an analytical method that is able to take the effect of meteorological conditions into account. The method is much simpler and easier to replicate than other published methods.
Introduction
Ostrava is the third largest city of the Czech Republic. With a population of approximately 300,000 inhabitants, it is the center of a densely inhabited industrial region with more than a million inhabitants, which directly borders Polish Rybnik and Katowice industrial regions. The industrial character of the Upper Silesia region was determined by rich hard-coal deposits, which have been mined here since the 18th century. The presence of coal deposits allowed the growth of industries that use coal as an energy source or feedstock, such as steel production, electricity generation, chemical industry, etc., and downstream industries like machine industry. The presence of employment opportunities also led to the quick population growth during the 19th century and most of the 20th century.
The high population density, combined with the presence of energy and feedstock intensive industries, resulted in serious environmental problems, including air quality, which peaked in the 1970s [1]. There has been an intensive effort to remedy the environmental problems of the Ostrava region, which has brought significant improvements. However, the region remains the most air polluted part of the Czech Republic, and the Czech-Polish Upper Silesian industrial region is one of the most air polluted regions of the European Union.
Air Pollution
Air pollution is caused by the emission of chemical substances into the atmosphere. Pollutants are chemical substances that have an observed negative effect on any part of the environment, i.e., ecosystems, the Earth's climate, human health or properties. Pollutants can be divided into primary pollutants, which are directly emitted into the atmosphere, and secondary pollutants, which are products of chemical and photochemical reactions in the atmosphere [2,3].
The standard classification divides pollution sources into two groups according to their origin, namely, anthropogenic sources and natural sources. Natural sources of air pollution are natural biotic or abiotic processes such as forest fires, lightning, sea spray, pollen and mold transmission, decomposition processes, etc. Anthropogenic pollution is the result of human activities and can therefore be reduced or prevented. Human activities are concentrated in populated areas, so air pollution is also usually most severe in areas with a high population density, mainly as a result of certain activities, i.e., fuel burning, material processing, energy-intensive processes, etc.
Commonly monitored air pollutants are sulphur dioxide, carbon monoxide and dioxide, nitrogen oxides, benzo[a]pyrene, and other polyaromatic hydrocarbons [4]. Their most common source is thermal processes. Nitric oxide (NO) and nitrogen dioxide (NO 2 ) are the components of nitrogen oxides (NO x ). Their concentrations in urban areas are currently associated mainly with car traffic [5]. Most NO x emissions consist of NO. In the free atmosphere, NO reacts with oxygen and the hydroperoxy radical (HO 2 •) to form NO 2 . Particulates are another common air pollutant. Particulate matter (PM x ) is usually monitored by diameter fraction: coarse (PM 10 ), fine (PM 2.5 ) and ultrafine (PM 1 ) particles, defined as particles with diameters of ≤10 µm, ≤2.5 µm and ≤1 µm respectively. The physical and chemical composition of particulate pollution is very diverse: PM x consists of a combination of mineral particles, soot, bacteria, pollen, mold, salts, organic materials, etc. [2].
The study focuses on two pollutants. PM 2.5 is one of the most significant air pollutants in the region. Its sources are highly spatially variable through the Ostrava region. It can represent different kinds of pollution at different sites. NO x were selected to represent car traffic pollution in the region.
Air Pollution Monitoring
The air pollution monitoring network of the Czech Republic is run by the Czech Hydrometeorological Institute (CHMI). The CHMI operates a majority of monitoring stations, collects measurement data from other organisations and publishes them. The legislative norm defining such activities is Act No. 201/2012 Coll. on air protection as amended ('Air Protection Act'). There are currently 198 air pollution monitoring stations in the Czech Republic, 127 of which are run by the CHMI and 71 stations are run by other governmental organisations and private companies. According to the EU legislation [6], monitored pollutants are PM 10 , PM 2.5 , SO 2 , NO x , NO 2 , CO, benzene, benzo[a]pyrene, and toxic metals in aerosol (Pb, Cd, As, Hg). The monitoring network is the densest in regions where increased air pollution occurred historically, including the Ostrava region. In 2019, 12 monitoring stations were operated in Ostrava, and 36 monitoring stations were run in the Ostrava region [7].
The density of the monitoring network enables the selection of specific monitoring stations covering different aspects of air pollution in the region.
Air Quality Management
The city of Ostrava forms part of the Ostrava/Karviná/Frýdek-Místek conurbation, which is the subject of the Czech Environment Ministry's Air Quality Improvement Program. According to § 9 par. 4 of Act No. 201/2012 Coll. on air protection, the aim is to achieve the required air pollutant levels as soon as possible, and then to maintain and improve air quality throughout the conurbation. Participating organizations must proceed in accordance with legislative requirements. As a participant in the program, Ostrava implements regular short-term air quality improvement programs, incorporating the Action Plan for reducing air pollution within the city. The third update of the Action Plan (2017) is currently being implemented.
On 22 September 2020, a document, which is an updated air quality improvement program for the agglomeration of Ostrava/Karviná/Frýdek-Místek-CZ08A for the period 2020+, was approved. The 2020+ program was preceded by the air quality improvement program for the agglomeration Ostrava/Karviná/Frýdek-Místek-CZ08A of 14 April 2016, file no.:23967/ENV/16, which was issued in accordance with the Air Protection Act as amended on 14 April 2016 in the form of measures of a general nature [8].
The Action Plan of the Statutory City of Ostrava for the Implementation of the Air Quality Improvement Program of the Ostrava/Karviná/Frýdek-Místek Agglomeration -CZ08A contains specific information on activities/projects. It also incorporates a schedule for the implementation of relevant activities and partial steps of individual projects, financial demands, financial coverage, deadlines, and internal responsibilities. In May 2020, on the basis of a contract concluded with the State Environmental Fund of the Czech Republic, a report on the fulfilment of the proposed activities was prepared [9].
On 31 August 2015, the project "Sustainable Mobility Plan Ostrava" was successfully completed. In 2018-2019, a gradual evaluation of the fulfilment of individual tasks of the Action Plans and of individual items of the project reservoirs took place. In 2021, work is underway to update the key tasks of the Action Plans. The Sustainable Mobility Plan is a strategic document designed to meet the mobility needs of people and businesses in and around cities, and to ensure a better quality of life. It is a way of tackling transport problems in urban areas more effectively. The aim of the Sustainable Urban Mobility Plan is to create a sustainable urban transport system with at least the following objectives:
• ensure that the accessibility offered by the transport system is available to all;
• improve transport safety;
• reduce air pollution, noise pollution, greenhouse gas emissions and energy consumption;
• improve the efficiency and economy of passenger and freight transport;
• contribute to improving the attractiveness and quality of the urban environment and urban design [10].
Another CLAIRO project (Clean Air and Climate Adaptation in Ostrava and Other Cities) by the Silesian University in Opava as one of the main partners is implemented by the city of Ostrava. The project aims to systematically reduce air pollution by planting suitable greenery with a proven ability to absorb air pollutants from various sources [11].
COVID-19 Disease
In December 2019, a cluster of acute respiratory diseases, now known as novel coronavirus-infected pneumonia, occurred for the first time in the Wuhan district, Hubei Province of the People's Republic of China [12]. The analysis of samples from affected patients revealed that their symptoms were caused by a coronavirus, later named severe acute respiratory syndrome (SARS) coronavirus (CoV) 2 (SARS-CoV-2) [13]. Coronaviruses belong to Coronaviridae, which is a family of RNA viruses [14]. Within five months, the disease affected more than 210 countries. In March 2020, the World Health Organization announced that the spread was a global pandemic [15]. Due to the high degree of uncertainty about containing the virus, many countries imposed national measures focused on restrictions to day-to-day life. In the Czech Republic, the first three cases of the disease were confirmed on 1 March 2020. The first wave of the epidemic in the Czech Republic culminated around 12 April 2020, when a total of 4750 people infected with COVID-19 were registered (Figure 1), 436 of whom were hospitalized, including about a hundred patients in serious condition. Thereafter, the number of recovered patients began to outweigh the number of newly infected, and the number of hospitalized patients also declined. The number of people in the Czech Republic with a positive test for COVID-19 stabilized at 2000-2500 during May and June 2020 [16]. In the spring of 2020, the national lockdown included the closure of restaurants, nonessential shops, gyms, and swimming pools. Travelling was limited to essential shopping, work purposes, and taking care of close relatives. Some exceptions were allowed, such as going outside for exercise or spending time in nature [17].
COVID-19 Disease Influence on Air Pollution
The situation caused by the spread of the SARS-CoV-2 virus disease and the resulting pandemic significantly affected the social and economic activities of society across the world. A large number of studies is devoted to the influence of air pollution on the spread or consequences of the COVID-19 disease [12].
This unique situation is a suitable moment for the assessment of the influence and spatial-temporal distribution of anthropogenic pollution.
To reduce the consequences of the pandemic, various restrictions and regulations were applied, such as restrictions on the free movement of persons, mandatory medical face masks, restrictions on social activities, restrictions on sports activities, restrictions on trade and services, restrictions on emergency medical care, population testing, and the closure of borders (of states, regions and municipalities). These measures could lead to changes in the spatial-temporal distribution of air pollution concentrations. The changes are presumed to be mostly local in character, as they concern anthropogenic emissions.
There are a number of studies that investigated the dynamics of air pollution in relation to the COVID-19 disease. These studies can be classified according to the analysis used. A significant number of studies focus on comparing the pre-pandemic situation and the pandemic situation when measures and lockdowns were applied [18][19][20][21][22]. This approach is not entirely correct. It is necessary to bear in mind the variability of meteorological conditions.
Another group of studies used more sophisticated methods that incorporated the influence of factors such as meteorological conditions (precipitation, temperature, atmospheric stability, pressure, etc.) [23]. The factor of meteorological conditions is significant, especially in regions with a changeable character of the weather. The study performed by Beloconi et al. [24] presents the results of the Bayesian spatio-temporal (BST) model, which was developed to assess changes in NO 2 and PM 2.5 concentrations in Europe. Factors describing land and vegetation cover, impermeability, settlements, terrain, transport, Normalized Difference Vegetation Index (NDVI) and other remote sensing data sources, humidity, meteorological data, and dust were included in the modeling. The research presented by Bekbulat et al. [25] shows an approach that focuses on the comparison of air pollution data describing specific weeks. Pairs for the comparison were chosen based on the similarity of weather conditions during those weeks. The weather situation was taken into account in the study performed by Baldasano [26].
Studies that focused on the effect of the COVID-19 lockdown on air quality either did not take meteorological factors into account or used advanced analytical tools that would be highly difficult to replicate on different datasets. A simpler and easily replicable method would be useful in regions where meteorological conditions are highly variable and simple year-over-year comparisons cannot provide proper answers.
Methods and Data
The late winter and spring weather in Central Europe is highly variable. The region lies between the still cold mass of Scandinavia and the already warm and quickly warming Mediterranean region, between the dry continental air in the east and the humid air of the North sea region. Any weather front, any movement of cyclones and anticyclones can result in a rapid weather change. For this reason, the spring weather can significantly differ from year to year. Any air quality analysis and/or comparison should filter out the weather influence.
Therefore, the analysis was divided into three logical steps:
• Clustering of meteorological data;
• Testing of the pollution difference in each weather cluster;
• Estimation of the lockdown effect on air quality.
All data processing, analysis and graphs in the study were performed in the Python 3.7 programming language. Pandas and numpy modules were used for data processing, the scipy module was used for statistical analysis, the scikit-learn module was used for k-means clustering, and the matlibplot and seaborn modules were used for graph generation. All the modules above are part of Python's Anaconda distribution [27].
Cluster Analysis
K-means clustering is a well-known clustering method. Its goal is to partition n data vectors into k cluster subsets. Each cluster is represented by a centroid vector, and each data vector is placed in the cluster whose representative centroid is nearest. The default distance measure is the Euclidean metric; however, other metrics can also be used. Cluster centroids divide the vector space into Voronoi cells (Figure 2). K-means clustering is an optimisation problem that involves searching for the positions of cluster centroids that minimize the inertia of the cluster model. Inertia is defined as the sum of squared distances between data vectors and their corresponding cluster centroids.
K-means clustering is known to be an NP-hard problem [29]. There is no computational algorithm that would solve the optimisation problem of k-means in reasonable time. However, there are iteration algorithms that enable quick computations to find the local minima of inertia. The global minimum is estimated by running the iteration algorithm several times with different initial approximations and selecting the best performing result [30].
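In practice this multi-restart strategy is what scikit-learn's KMeans does via its n_init parameter; a minimal sketch, assuming X holds the normalized daily weather vectors:

```python
from sklearn.cluster import KMeans

# X: normalized daily weather vectors, shape (n_days, n_features) (assumed)
km = KMeans(n_clusters=7, n_init=20, random_state=0)  # 20 random restarts
labels = km.fit_predict(X)          # cluster label for each day
print(km.inertia_)                  # sum of squared distances to centroids
print(km.cluster_centers_.shape)    # (7, n_features)
```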
Analysis of Variance Testing
Analysis of Variance (ANOVA) is a class of statistical models designed to analyze the difference between the means of datasets. ANOVA tests compare several datasets with each other and examine whether there are statistically significant differences among their means. A special case of ANOVA tests is the t-test, which compares the difference between the means of two datasets [31].
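Since each subset in this study is split into just two groups (lockdown versus baseline), the one-way ANOVA test reduces to the following form; a minimal SciPy sketch, with array names assumed for illustration:

```python
from scipy import stats

# lockdown, baseline: 1-D arrays of daily concentrations within one
# (station, pollutant, weather-cluster) subset (assumed)
f_stat, p_value = stats.f_oneway(lockdown, baseline)
significant = p_value <= 0.1        # alpha = 0.1, as used later in the study
```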
Pollution Monitoring Data
Two pollutants were selected for analysis, i.e., PM 2.5 and NO x . The PM 10 concentrations were omitted because they correlate highly with PM 2.5 : the correlation between PM 2.5 and PM 10 is greater than 0.97 at all analyzed monitoring stations, and the analysis of PM 10 would not bring new insights. The concentrations of NO x and NO 2 are also strongly correlated (0.84-0.86) at all analyzed monitoring stations. We selected NO x for analysis since they better reflect car traffic pollution, mainly at traffic monitoring sites where a large part of NO has not yet reacted with oxygen to form NO 2 .
Other main pollutants (SO 2 , benzo[a]pyrene) that were measured at the monitoring stations were not used due to data quality. SO 2 concentrations are commonly near or below the detection limit of the monitoring devices, while benzo[a]pyrene concentrations are determined from 6-day samples and the data do not have sufficient time resolution.
Three pollution monitoring stations in the Ostrava region were selected for analysis. Each station represents a different source of air pollution ( Figure 3, Table 1). A more detailed description of the selected monitoring stations is available in Appendix A.
Monitoring Station Categorization
• Ostrava-Českobratrská: Traffic / Urban / Commercial-Residential
• Ostrava-Přívoz: Industrial / Urban / Industrial-Residential
• Věřňovice: Background / Rural / Residential-Agricultural

All three selected stations measured both PM 2.5 and NO x concentrations at 1 h intervals. Hourly data for 2020 had not yet been published, but were kindly provided for the study by the Czech Hydrometeorological Institute. The hourly data were used to calculate daily averages, which were later utilized in the study. Brief exploratory statistics of the pollution data are shown in Table 2.
Meteorological Data
Meteorological data used in this study were measured at the meteorological station of the Leoš Janáček Airport Ostrava (LKMT). The meteorological station is located in the open field of the airport and is not affected by local conditions (e.g., buildings in the vicinity). Therefore, the station well represents general meteorological conditions in the wide valley of the Moravian Gate, which also includes Ostrava ( Figure 4).
The data were downloaded from the Weather Underground website [32] as a set of measurements covering the whole time period of the study. Measurements were taken at 30 min intervals. The time period is from 1 February 2019 to 30 June 2019 and from 1 February 2020 to 30 June 2020. 14,352 meteorological measurements were processed. Each measurement consisted of date, time, temperature, humidity, wind direction, wind speed, and atmospheric pressure. All meteorological variables were used to calculate daily averages. The only exception was the wind speed data, in which the fractions of each wind direction category were calculated. The wind direction data were categorized into 18 categories. There were 16 categories representing wind direction. The remaining two categories were calm wind and variable wind. The brief exploratory statistics are shown in Table 3 and Figure 5 (Numerical values of the wind direction statistics are available in Appendix B). The day-of-year variable was also included in the meteorological data. This variable represented the intensity of insolation, which changes rapidly in late winter and spring around 50N latitude, at which the study was performed.
Meteorological Data Clustering
The meteorological data were grouped using the k-means algorithm into clusters with similar weather conditions. During the process, two issues had to be solved:
• data normalization;
• the number of clusters.

The data needed to be normalized: without normalization, clustering would be disproportionally weighted towards the parameters with the highest numerical differences. A uniform normalization was selected. Each meteorological parameter was linearly transformed into the [0, 1] interval by the formula

x' = (x - min(x)) / (max(x) - min(x)),

where min(x) and max(x) are the minimum and maximum of the parameter over the dataset. The most suitable number of clusters was determined experimentally: clustering was performed on the meteorological data for numbers of clusters ranging from 2 to 40, and the optimal number of clusters was selected by analysing the inertia of the cluster models.
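The normalization step can be written in a few lines of NumPy; X is assumed to be the raw daily weather matrix with one column per parameter:

```python
import numpy as np

# X: raw daily weather matrix, shape (n_days, n_features) (assumed)
X_min = X.min(axis=0)
X_max = X.max(axis=0)
X_norm = (X - X_min) / (X_max - X_min)   # each column mapped to [0, 1]
```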
The first derivative of the function I, which captures the dependence of inertia on the number of clusters (visualized in the upper graph of Figure 6), was approximated by the central difference formula

I'(k) ≈ (I(k - 1) - I(k + 1)) / 2.

We swapped the order of the I function values in the formula to obtain positive values of the first derivative, since I is a decreasing function of k.
The second derivative of the function I was approximated by the difference formula

I''(k) ≈ I(k + 1) - 2 I(k) + I(k - 1).

According to the second derivative approximation, the function I can be split into two parts. In the first part, the function shows a significant improvement in clustering performance; the second part shows only a slight, roughly linear improvement. For this reason we selected the split value of 7 as the optimal number of clusters for k-means clustering. The centroids of each cluster are shown in Table 4.
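The inertia curve and its difference approximations can be computed as follows; this is a sketch of the procedure described above, with the n_init value assumed:

```python
import numpy as np
from sklearn.cluster import KMeans

ks = range(2, 41)
inertia = np.array([KMeans(n_clusters=k, n_init=10, random_state=0)
                    .fit(X_norm).inertia_ for k in ks])

# Central differences on the inertia curve (order swapped so values are positive)
d1 = (inertia[:-2] - inertia[2:]) / 2.0
d2 = inertia[2:] - 2 * inertia[1:-1] + inertia[:-2]
# Inspect d1/d2 to find where the improvement flattens (k = 7 in the study)
```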
Variance Analysis of Pollution Data
All pollution measurements used in the study were categorized according to the weather cluster they belong to. The next step in the analysis was one-way ANOVA testing performed on each unique dataset defined by a combination of pollutant, monitoring station, and weather cluster. The data subsets were tested for the significance of the lockdown occurrence with α = 0.1. The results of the analysis are visualized in Table A2.
Lockdown Effect on Air Pollution Estimation
The ANOVA test results were used to estimate what the pollution concentrations would have been if no lockdown had been put into action. Each measurement can be categorized by a unique combination of monitoring station, measured pollutant, and weather cluster. If the ANOVA test found no significant difference for such a category, the measurement remained intact. If the ANOVA test found a significant difference, the measurement was multiplied by the ratio of the mean value outside lockdown to the mean value during lockdown. The change in the mean value for the February-June period is summarized in Table 5.
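A compact sketch of this counterfactual estimation, assuming a long-format DataFrame carrying the ANOVA outcome per category (column names are illustrative):

```python
import pandas as pd

# df columns (assumed): station, pollutant, cluster, lockdown (bool),
# value, significant (bool, from the ANOVA step)
def no_lockdown_estimate(g):
    if not g["significant"].iat[0]:
        return g["value"]                        # no significant difference: keep as-is
    ratio = (g.loc[~g["lockdown"], "value"].mean()
             / g.loc[g["lockdown"], "value"].mean())
    # Scale lockdown measurements by the out-of-lockdown / in-lockdown mean ratio
    return g["value"].where(~g["lockdown"], g["value"] * ratio)

df["estimate"] = (df.groupby(["station", "pollutant", "cluster"], group_keys=False)
                    .apply(no_lockdown_estimate))
```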
Discussion
There is a significant influence of weather conditions on air pollution, which needs to be accounted for when comparing pollution monitoring results. The authors of the study decided to deal with this issue by dividing weather conditions into naturally occurring clusters of similar weather conditions. The k-means algorithm detected 7 such clusters in the meteorological dataset covering 1 February 2019 to 30 June 2019 and 1 February 2020 to 30 June 2020.
Each day of the dataset was labeled by the k-means model, and the labelling was used to divide the NO x and PM 2.5 pollution data into 7 datasets. Each dataset was tested for a significant difference between the concentrations measured while the anti-epidemic measures of the first wave of the COVID-19 lockdown were active and the concentrations measured during ordinary social and economic activities. The analysis results made it possible to calculate an estimate of the pollution difference caused by the anti-epidemic measures.
The results showed a 4.1-6.7% decrease in the NO x concentrations, probably mainly due to lower traffic intensity. The decrease of PM 2.5 was not significant at all monitoring sites. The highest decrease was estimated at the Ostrava-Českobratrská traffic station (4.7%). The Věřňovice monitoring station, which mainly focuses on domestic heating pollution, showed no significant effect of anti-epidemic measures on the PM 2.5 concentrations.
The results of the study are consistent with the results of the study [24], which assessed air pollution throughout the European Union. The article also found a more significant decline in NO 2 pollution and a lower decline of PM 2.5 .
The study [19] found a higher decline in air pollution. The steeper decline can be attributed to the fact that India is a developing country with a lower level of pollution prevention. This means that any decline in economic and social activities has a greater effect on overall pollution.
The study [20] found a 16% decrease of NO 2 and a 19% decrease of PM 2.5 in California when compared with the monitoring results over the previous 5-year period. The effect of weather conditions was not filtered out in this study. Therefore, it is not clear how much of the decline can be attributed to weather conditions and how much can be attributed to anti-epidemic measures.
The study [25] compared air pollution monitoring results across the USA. The study also found a slight decline in air pollution during the COVID-19 lockdown.
Policy Implications
There are several studies that analyzed air quality in the Ostrava region [5,[8][9][10], and their results were used as foundations for policy recommendations, which were gradually implemented. Those studies were based on air pollution dispersion modeling using various models. The COVID-19 lockdown made it possible to study real world effects of emission changes, which is a way to verify model results.
All air pollution studies consider car traffic to be the main source of NO x and NO 2 pollution in the urban areas of the Ostrava region. This result was verified by our study, since the decline of NO x at all monitoring sites coincides with the decline in car traffic during the lockdown, when most of the population switched to home-office work schedules. The other key sources of NO x pollution, industrial sources (electricity production, heat production, steel making), continued their production almost uninterrupted.
The analysis of PM 2.5 pollution confirmed the importance of air pollution from domestic heating, which is the main source of PM 2.5 in the region. The greatest decline in the PM 2.5 concentrations was observed at the Ostrava-Českobratrská traffic station, which can be attributed to the decline in car traffic. No decline in the PM 2.5 concentrations at the Věřňovice station well coincides with the fact that domestic heating was not affected by the COVID-19 lockdown.
The verification of air pollution dispersion models further strengthens arguments for measures that are currently adopted to reduce emissions from domestic heating and car transport. There is a program run by the Moravian-Silesian regional government, which stimulates the replacement of old stoves using solid fuels by low-emission or emission-free substitutes. The program is aimed at the financing of replacements via direct subsidies and cheap financing [9].
The results of our study also confirm the importance of all measures taken to lower emissions from car traffic. This has been implemented in several programs aimed at traffic intensity reduction (support of public transport, cycling, ride-sharing, etc.), as well as a long-term switch to low-emission or emission-free vehicles. For example, public transport in Ostrava is now mostly electrified (trains, trams, trolleybuses, electric buses), while the rest have switched to low-emission CNG fueled buses [10].
Conclusions
We managed to develop an algorithm that filters out the effect of weather conditions on pollution concentrations when analyzing pairs of datasets coming from the same monitoring station. The algorithm was applied to compare data measured while the anti-epidemic measures of the first wave of the COVID-19 lockdown were active with data measured during ordinary social and economic activities. However, the algorithm is generally applicable to different situations.
The algorithm itself is the main result provided by the study since it allows taking weather conditions into account, while being much simpler and easier to apply when compared to other published methods.
The concentration estimates without the effect of anti-epidemic measures can be further improved by using a more advanced estimation technique applied to each data subset, which is defined by meteorological data clustering. Multidimensional statistical regression or artificial neural network estimates can be applied.
The results of the study confirm the results of air quality studies based on air pollution dispersion modeling and support recommended policies focused on lowering emissions from domestic heating and car traffic.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A. Pollution Monitoring Stations
Appendix A.1. Monitoring Station Ostrava-Českobratrská
The Ostrava-Českobratrská station is located in the street canyon of Českobratrská street in the center of Ostrava. The station is designed to be a traffic hot-spot station and was selected because its data well represent car traffic pollution. According to the last published traffic survey in 2016, traffic intensity in Českobratrská street was 19,081 cars, 1,645 trucks and buses, and 49 motorcycles [33]. Traffic in Českobratrská street is regulated by traffic lights, which results in increased emissions due to the start-stop behaviour of the traffic flow.
Abbreviations
The following abbreviations are used in this manuscript: | 2021-08-10T13:30:28.660Z | 2021-08-01T00:00:00.000 | {
"year": 2021,
"sha1": "6590f7b9a58a6b91547c6a6a03dfae8166b56c5a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/18/16/8265/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3b0b3f405505d8c8b9064748a1f8f9a6196b65b7",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250133897 | pes2o/s2orc | v3-fos-license | Seromucinous Cystadenoma Presenting as Endometriosis Complications in a 57-Year-Old Female: A Case Report
Endometriosis should be considered when a female patient reports symptoms of severe pain/tenderness in the pelvic area associated with a frequent need for urination, bloating, vomiting, or nausea. Clinical suspicion is increased if the patient has a history of endometriosis. However, many patients with endometriosis can be asymptomatic, which is why physicians and providers must keep an open mind and have a broad differential. Examinations that aid in the diagnosis of endometriosis include but are not limited to a pelvic examination, an ultrasound, magnetic resonance imaging (MRI), and an exploratory laparoscopy. In this case study, we present a 57-year-old postmenopausal female patient who presented to her obstetrics and gynecology (OBGYN) physician with hot flashes and an abnormal ultrasound revealing an ovarian cyst. Seventeen years prior, at the age of 40, the patient was found to have endometriosis and endometrial polyps and underwent a left oophorectomy. Due to the patient’s history, symptoms, and current scans, it was assumed that the present cyst was a complication of endometriosis. Ultimately, the cyst, right ovarian cyst wall, right fallopian tube, and uterine fibroids were surgically removed and sent to pathology. Upon further review of the patient’s pathology reports, it was found that the cyst removed was a seromucinous cyst with focal borderline features.
Introduction
Endometriosis is a common and chronic gynecological disease that impacts the pelvic organs of roughly 10% of women of reproductive age, with the ovary being one of the most affected organs [1]. Ovarian endometriomas (OEs), or ovarian endometriosis cysts, are present in 17%-44% of women with endometriosis [1,2]. Moreover, the recurrence rate of such cysts is reported to be up to 50%, creating major concerns for patients with them [3]. The symptoms and signs of OEs often include pain and tenderness in the pelvic area, frequent urination, vomiting, and nausea [4,5]. Some women are asymptomatic, making some OEs challenging to diagnose [4].
The treatment of endometriosis is vast but usually includes ultrasound-guided aspiration and various surgical procedures such as laparoscopy [2]. Many cases are treated empirically by suppressing ovulation and estrogen production. To date, laparoscopic cystectomy seems to be the most effective treatment for endometriosis [2].
Due to the nature in which endometriosis manifests, it may be misdiagnosed initially. Its symptoms are very similar to other ailments, which makes the differential diagnosis list expansive. The various differential diagnoses include functional cyst, ovarian abscess, serous cystadenoma, epithelial carcinoma, germ cell tumor, pelvic inflammatory disease, and ectopic pregnancy [4]. There are also various non-gynecological diagnoses such as appendicitis, diverticulitis, and/or urinary tract infection (UTI) [4].
Ovarian cysts of various extents of malignancy are often connected to an initial endometriosis diagnosis [6]. One such indeterminate malignancy is seromucinous borderline tumors (SMBTs). These tumors have low-grade malignant potential and include aspects of both serous and mucinous tumors [7].
The average age of patients with SMBTs is 33-44 years, and 30%-50% of such tumors are associated with endometriosis [8]. Diagnosing an SMBT can be difficult because its immunohistochemical expression patterns are similar to those of OEs [9]. Moreover, studies have attempted to improve the diagnostic distinction between ovarian carcinomas and gastrointestinal metastases [10]. SMBTs have previously been connected to somatic mutations in the AT-rich interactive domain 1A gene (ARID1A), a tumor suppressor gene, and to ARID1A protein loss [8]. This connection is valuable to establish because ARID1A mutations are identified in atypical endometriosis and endometriosis-related carcinomas, meaning such genetic mutations suggest an early sign of malignant transformation [8]. Furthermore, cases of SMBTs have been found to be correlated with genetic mutations in Kirsten rat sarcoma virus (KRAS), while having no substantial connection to phosphatase and tensin homolog (PTEN) mutations [8,11]. Understanding the immunohistochemistry of SMBTs and endometrioid carcinomas can allow clinicians to identify patients at a higher risk of carcinogenesis [8]. SMBTs generally have good patient outcomes; thus, correctly diagnosing the tumor, and distinguishing it from OEs and other more aggressive carcinomas, is important for proper treatment and for the reduction of unnecessarily aggressive therapies [7].
This article was previously presented as a poster at the 2022 Symposium University Research and Creative Expression (SOURCE) on April 29, 2022.
Case Presentation
A 57-year-old postmenopausal nulligravida female presented to her obstetrics and gynecology (OBGYN) physician with complaints of hot flashes and to counsel for an abnormality found in an ultrasound two months prior that revealed a right adnexal visualized cystic lesion.
She had a family history of stroke, which her mother was diagnosed with at the age of 67. Her social history indicated nothing of significance as to diet, exercise, and stress levels. Her past medical history was significant for endometriosis with endometrial polyps and fibroids and surgical history of a left oophorectomy 17 years prior. Physical examination of the pelvis and genitalia revealed palpable cystocele and rectocele. Laboratory work including CBC, urine analysis, hormone evaluation, Pap smear, and tumor markers (CA-125 and hCG) were all negative and within the normal range. The only abnormality was a slightly elevated HbA1c level.
A pelvic MRI was done a month after the appointment. The test revealed a right ovarian cyst that was slightly larger in comparison to previous ultrasounds conducted. The patient was recommended to have the cyst removed laparoscopically.
The patient was reluctant to undergo another surgery for the treatment of the right ovarian cyst, citing that she had no pain or other symptoms from the cyst. Ultimately, the patient went to see an endometriosis specialist two years later.
The differential diagnosis for a patient with a history of endometriosis and a visualized cyst on an ovary is consistent with an endometrioma.
The patient underwent additional transabdominal and transvaginal ultrasounds of the pelvis one year after the initial presentation. These findings included an anterior myoma at the fundus measuring 3 × 2.5 × 3.2 cm, a left anterior myoma measuring 1.8 × 1.7 × 2.2 cm, and a fundal subserosal myoma measuring 4.3 × 3.7 × 3.6 cm. A right ovarian cyst measuring 6.3 × 4.9 × 6.8 cm and a smaller cyst measuring 1.3 × 0.7 × 1.1 cm were also found. The physician noted that the cyst of the right ovary was larger compared to previous ultrasound findings. The patient also underwent an MRI scan of the pelvis with and without intravenous gadolinium two years after initial presentation. These findings included a complex right ovarian cyst measuring 6.8 × 6 × 6.5 cm and a right fundal exophytic fibroid measuring 5.3 × 4.5 × 4.4 cm, which was found to be stable. Both sets of imaging (MRI and ultrasound) were found to be consistent with an endometrioma.
The patient's preoperative diagnosis was a complex right ovarian cyst. The patient was scheduled for a laparoscopic right cystectomy and oophorectomy, right salpingectomy, removal of adhesions, and myomectomy. The procedure was conducted under general anesthesia. Upon pelvic visualization, the uterus was noted to be irregularly enlarged with one 6 × 7 cm fundal fibroid (Figure 1). Further inspection revealed multiple adhesions of the right ovary, composed of smaller intramural and subserosal fibroids. The ovarian cyst was located superior to the pelvic brim and measured 6 × 8 cm with a very thick capsule. The excision of the fundal fibroid and the removal of the adhesions were conducted prior to the removal of the ovarian cyst (Figure 2). The right fallopian tube was also excised. Following termination of the procedure, the patient was brought to the recovery room in stable condition. Specimens sent to pathology included the right ovary, right ovarian cyst, right fallopian tube, uterine fibroid, and adhesions. The intraoperative diagnoses were a right ovarian cyst, fibroid uterus, and pelvic adhesions. The postoperative pathology reported a 5 × 4 × 2 cm seromucinous cystadenoma with focal borderline features. The uterine fibroid samples were found to be benign leiomyoma, and the samples from the fallopian tube were deemed unremarkable.

The preferred therapy for a cyst suspected to be a complication of endometriosis is surgical removal [12]. In this case, the surgeon performed a laparoscopy and was able to conduct a right salpingectomy, right ovarian cystectomy, and right oophorectomy. Following the operation, the patient was kept overnight at the hospital in order to continuously monitor her vitals. The patient was discharged the following day, and a follow-up appointment was set for one week after the surgery. At the follow-up, the patient was noted to be doing well and healing/recovering adequately.
Discussion
Symptoms of endometriosis may present in various ways that mimic other conditions, making its diagnosis and treatment pertinent to patient health. Its symptoms include pain or pressure in the lower abdomen and severe acute pain along with nausea and vomiting if the cyst has ruptured [4]. In order to diagnose endometriosis, the clinician may use health history, physical examinations, blood work, and imaging such as MRI or CT scans, as used in the case presented [4]. Recurrence of endometriomas is another problem that arises with the treatment of endometriosis. To properly manage the disease, a prediction of the patient's risk must be determined, as this will ensure proper treatment is being implemented for the patient [3]. One of the more frequent diagnoses associated with endometriosis is the borderline seromucinous cyst. Due to its similarity in presentation, many cases of borderline seromucinous cysts go undiagnosed or misdiagnosed, furthering its course [9].
Borderline seromucinous cysts present in a manner similar to endometriosis, sharing similar symptoms as well. In the case presented, the patient was believed to have a recurrent endometrioma, as suggested by her presentation, past medical history, and medical imaging. Ultimately, upon surgical intervention and pathological examination, the diagnosis was found to be a borderline seromucinous cyst. It is interesting to note that other documented cases have highlighted how seromucinous borderline tumors were mistaken for endometriomas. In a similar case report, a nontender mass was discovered during a physical examination of a 39-year-old pregnant Japanese patient [9]. After an initial sonogram, the mass was thought to be ovarian cancer; however, after further imaging through an MRI examination, the mass was thought to be a decidualized endometrioma [9]. Unlike the patient presented in this case study, the 39-year-old patient had no past medical history of endometriosis, pain in the abdominal region, or abdominal surgery [9]. Only after surgical intervention and receipt of the histopathological results was the cyst discovered to be a seromucinous borderline tumor [9].
As seromucinous cysts are frequently borderline, the risk of serious outcomes is small [8]. Because of the patient's past medical history and negative tumor marker tests, the initial presumptive diagnosis was a benign endometrial cyst. As a result, against the doctor's recommendation, the patient did not feel immediate removal was necessary, and surgical intervention was postponed. After the surgical intervention and subsequent pathology reports, it was discovered that the diagnosis was in fact a seromucinous cystadenoma with focal borderline features, for which surgical removal was clearly indicated. It is valuable to consider all differentials when diagnosing, in order to inform the patient of all the possibilities that could arise and of what could occur if treatment is delayed.
Conclusions
The symptoms, modes of diagnosis, and various treatments involved with endometriosis were discussed in this case study. The signs and symptoms that are used to diagnose seromucinous cysts were also examined.
In this case study, a female patient presented with what appeared to be signs of endometriosis but ultimately ended up being a seromucinous cyst with borderline features. As previously stated, in this case, because it was assumed that the right ovarian cyst was a recurrent endometrioma, the surgery was delayed by the patient, allowing the seromucinous cyst a chance to grow. This is clinically relevant because it shows just how important it is to keep in mind a broader differential diagnosis in order to avoid possible negative outcomes and for physicians to provide the best possible treatment for their patients.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2022-06-30T15:21:48.971Z | 2022-06-01T00:00:00.000 | {
"year": 2022,
"sha1": "fc1df4b551ca5b390d4436278044133f30387bca",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/95487-seromucinous-cystadenoma-presenting-as-endometriosis-complications-in-a-57-year-old-female-a-case-report.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a3574e4899826994b5b230596acfeced3dbfbcac",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
6928260 | pes2o/s2orc | v3-fos-license | Local Deep Neural Networks for Age and Gender Classification
Local deep neural networks have recently been introduced for gender recognition. Although they achieve very good performance, they are very computationally expensive to train. In this work, we introduce a simplified version of local deep neural networks which significantly reduces the training time. Instead of using hundreds of patches per image, as suggested by the original method, we propose to use 9 overlapping patches per image which cover the entire face region. This results in a much reduced training time, since just 9 patches are extracted per image instead of hundreds, at the expense of a slightly reduced performance. We tested the proposed modified local deep neural networks approach on the LFW and Adience databases for the tasks of gender and age classification. For both tasks and both databases the performance is up to 1% lower compared to the original version of the algorithm. We have also investigated which patches are most discriminative for age and gender classification. It turns out that the mouth and eye regions are useful for age classification, whereas just the eye region is useful for gender classification.
Introduction and Related work
Gender classification and age estimation can benefit a wide range of applications, e.g. visual surveillance, targeted advertising, human-computer interaction (HCI) systems, content-based searching, etc. [10]. In order to solve these two tasks accurately and efficiently, a modified version of the Local Deep Neural Networks (LDNN) [9] is proposed. The proposed method achieves similar results to most state-of-the-art methods while the amount of computation needed is largely reduced. We also investigate which face regions are important for gender/age-group classification. The conclusion drawn is that the eyes and mouth regions are the most informative ones for age classification, whereas just the eyes region is important for gender classification.
Age and gender classification using CNNs
A convolutional neural network (CNN) consisting of three convolutional layers was used in [8] to achieve 86.8±1.4% and 50.7±5.1% accuracy for gender and age-group classification, respectively, on the Adience database. Random patches of size 227 by 227 were cropped from the original and mirrored images in order to augment the training data and avoid overfitting. When testing, five 227-by-227 patches were generated from every single image. Four of them were aligned with the four corners of the image and one of them was aligned with the centre of the image. These five patches were then reflected horizontally, resulting in ten patches generated from every single image. The final prediction of the image was the average of these ten patches. The results of age classification on the Adience database were improved to 64.0±4.2% using the VGG-16 CNN architecture pre-trained on ImageNet [11].
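The ten-patch test scheme from [8] is simple to reproduce. Below is a minimal NumPy sketch of the five-crop-plus-mirror rule; `predict_fn` is a hypothetical stand-in for a trained network's forward pass and is not part of the cited work.

```python
import numpy as np

def ten_crop(image, size=227):
    """The test-time augmentation of [8]: four corner crops, one centre crop,
    and the horizontal reflections of all five."""
    h, w = image.shape[:2]
    tops = [0, 0, h - size, h - size, (h - size) // 2]
    lefts = [0, w - size, 0, w - size, (w - size) // 2]
    crops = [image[t:t + size, l:l + size] for t, l in zip(tops, lefts)]
    crops += [np.fliplr(c) for c in crops]          # mirrored versions
    return np.stack(crops)

def predict_image(image, predict_fn):
    """Average the class posteriors of the ten crops (the paper's test rule)."""
    return np.stack([predict_fn(p) for p in ten_crop(image)]).mean(axis=0)
```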
The LDNN pipeline [9] is shown in Figure 1. Initially, filters, e.g. edge or corner detectors, are used to find edges in images. Subsequently, patches are extracted around the detected edges. Finally, all patches are used to train neural networks. During testing, the predictions of all the patches obtained from one image are averaged as the final result for that image [9]. Classification rates of 96.25% and 77.87% at image and patch level, respectively, were achieved on the LFW database, and 90.58% and 72.83% at image and patch level, respectively, were achieved on Gallagher's database using LDNN [9]. Using patches obtained in this way seldom leads to overfitting since most redundant information has been removed during filtering. Therefore, a simple feed-forward neural network without dropout is used [9].
Labeled Faces in the Wild
The Labeled Faces in the Wild (LFW) database contains 13,233 face photographs labeled with the name and gender of the person pictured. Images of faces were collected from the web with the only constraint that they were detected by the Viola-Jones face detector [6]. There are four versions of LFW: the original version, the funneled version [4], the deep funneled version [5] and the frontalised version (3D version). LFW is an imbalanced database including 10,256 images of men and 2,977 images of women from 5,749 subjects, 1,680 of which have two or more images [6][7]. The 3D version is used in this work since the images are already cropped, aligned and frontalised properly as shown in Figure 2.
Adience database
The Adience database contains 26,580 face photos from 2,284 individuals with gender and age labels of the person pictured. The images of faces were collected from Flickr albums and released by their authors under the Creative Commons (CC) license. The images are completely unconstrained as they were taken under different variations in appearance, noise, pose, lighting, etc. [2].
There are three versions of the Adience database, including the original version, the aligned version and the frontalised version (3D version) with 26,580, 19,487 and 13,044 images respectively [2]. The 3D version is used in this work since most images are already frontalised and aligned to the centre of the image. However, images in the Adience database 3D version may be extremely blurry or frontalised incorrectly as shown in Figure 3. Additionally, people in the images could show emotions. Therefore, patches extracted from those images may not always contain the same face region, which may result in lower classification rates. There are eight age groups and another 20 different age labels in the Adience database as shown in Table 1. Using only the eight age groups is not feasible as no images for the 6th group, (38, 43), exist in folds 1 or 2. Therefore, images that are labeled with one of the 20 age labels were merged into one of the eight age groups. In order to use all images labeled with age and to make the data more balanced, images were grouped as follows (a partial mapping sketch is given below):
• The 1st age group: images labeled with '(0, 2)' or '2'
• The 2nd age group: images labeled with '(4, 6)' or '3' or '45'
• The 7th age group: images labeled with '(48, 53)', '55' or '56'
• The 8th age group: images labeled with '(60, 100)', '57' or '58'
We have defined this merging protocol, so results may not be directly comparable with other works that potentially use a different protocol. However, we were not able to find any publicly available protocol.
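For illustration, the partial mapping below encodes only the four group rules listed above; the source text omits the rules for the 3rd to 6th groups, so they are deliberately left out rather than guessed.

```python
# Partial merging map, reconstructed from the listed rules only; the source
# omits the rules for the 3rd to 6th groups, so they are not guessed here.
AGE_GROUP_OF_LABEL = {
    "(0, 2)": 1, "2": 1,
    "(4, 6)": 2, "3": 2, "45": 2,      # '45' copied verbatim from the text
    "(48, 53)": 7, "55": 7, "56": 7,
    "(60, 100)": 8, "57": 8, "58": 8,
}

def merge_age_label(label):
    """Return the merged age group for a raw Adience label, or None if the
    label falls under one of the rules omitted from the text."""
    return AGE_GROUP_OF_LABEL.get(label)
```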
One-off age classification rates
Due to the similarity of people in adjacent age groups, images classified into adjacent age groups are also considered to be classified correctly, and the corresponding result is called the one-off classification rate [8].
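A minimal sketch of this metric, assuming age groups are encoded as consecutive integers:

```python
import numpy as np

def one_off_accuracy(y_true, y_pred):
    """Fraction of predictions within one age group of the true group."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(np.abs(y_true - y_pred) <= 1))

# Groups 3 and 4 count as correct under the one-off criterion:
print(one_off_accuracy([3, 5, 8], [4, 5, 1]))  # 2 of 3 -> 0.667
```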
Three subsets of the Adience database 3D version
Images labeled with gender are not necessarily labeled with age groups and vice-versa. This leads to three subsets of the Adience database 3D version. The 1st subset consists of 12,194 images labeled with gender. The 2nd subset consists of 12,991 images labeled with age. The 3rd subset, where the experiments were conducted, consists of 12,141 images labeled with both gender and age as shown in Table 2.
The proposed LDNN nine-patch method
The LDNN approach (see Section 1.2) generates hundreds of patches for every single image. As a consequence, it is computationally expensive to train a neural network when there are thousands of images. In order to reduce the computational cost, we propose the nine-patch method, which generates only nine patches for every single image. Figure 4 shows an example of the nine patches of an image. The nine patches are indexed from left to right and then from top to bottom; the top-left patch is indexed as the 1st patch and the bottom-right as the 9th, as shown in Figure 4. The height and width of the patches are set to half of the height and width of the image. The overlapping ratio of adjacent patches is 50%; the 2nd patch overlaps 50% with the 1st and the 3rd patches, and the 4th patch overlaps 50% with the 1st and the 7th patches.
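A minimal sketch of the patch grid, assuming a square input whose side is divisible by four (as with the 60-by-60 images used here):

```python
import numpy as np

def nine_patches(image):
    """Extract the 9 overlapping patches described above: patch size is half
    the image size and adjacent patches overlap by 50% (stride = size / 2)."""
    h, w = image.shape[:2]
    ph, pw = h // 2, w // 2          # e.g. 30x30 patches from a 60x60 image
    sh, sw = ph // 2, pw // 2        # 50% overlap -> stride of 15 pixels
    patches = [image[r:r + ph, c:c + pw]
               for r in (0, sh, 2 * sh)    # top, middle, bottom rows
               for c in (0, sw, 2 * sw)]   # indexed left-to-right, top-to-bottom
    return np.stack(patches)

assert nine_patches(np.zeros((60, 60))).shape == (9, 30, 30)
```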
Pre-processing
Initially, images are converted into grey-scale (from 0.0, black, to 1.0, white). For images in the Adience database, the face is cropped using a fixed box, [20 20 100 100], which indicates that the coordinate of the top-left corner of the box is [20 20] and that the height and width of the box are both 100. For images in the LFW database, no box is needed since the images are already cropped and aligned properly. Subsequently, images are resized to 60 by 60 and the nine 30-by-30 patches are generated. For every single patch, pixel values are normalised to zero mean and unit variance.
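The following sketch mirrors this pipeline in NumPy; the luminance weights and the nearest-neighbour resize are assumptions, since the paper does not state the conversion or interpolation method:

```python
import numpy as np

def to_grey(rgb):
    """Grey-scale in [0, 1]; the luminance weights are an assumption."""
    return rgb.astype(np.float64) @ np.array([0.299, 0.587, 0.114]) / 255.0

def crop_and_resize(grey, box=(20, 20, 100, 100), out=(60, 60)):
    """Fixed Adience crop box [top left height width], then a nearest-
    neighbour resize to 60x60 (the interpolation method is an assumption)."""
    t, l, h, w = box
    grey = grey[t:t + h, l:l + w]
    rows = np.arange(out[0]) * grey.shape[0] // out[0]
    cols = np.arange(out[1]) * grey.shape[1] // out[1]
    return grey[np.ix_(rows, cols)]

def standardise(patch):
    """Normalise one patch to zero mean and unit variance."""
    return (patch - patch.mean()) / (patch.std() + 1e-8)
```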
Methodology
During training, the nine patches are extracted as shown in Figure 4 and used to train a neural network. The testing procedure is shown in Figure 5. When testing, for every single image, the corresponding nine patches are classified by the trained neural network. Subsequently, the outputs, or posteriors, of the nine patches are averaged. The averaged result is the final prediction, and the image is classified to the class with the highest posterior. We have run a series of experiments on the LFW database, including optimisation of parameters and different combinations. In order to test the nine-patch method, five-fold cross-validation using the same five folds as [9] is used. Around 2/3 of the patches of men are randomly discarded in each fold to balance the data.
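The test rule can be summarised as follows; `net` is a hypothetical callable returning class posteriors for one patch:

```python
import numpy as np

def classify_image(image, net):
    """Nine-patch test rule: classify each patch, average the posteriors and
    pick the class with the highest mean posterior."""
    patches = nine_patches(image)                 # from the sketch above
    posteriors = np.stack([net(p) for p in patches])
    mean_post = posteriors.mean(axis=0)           # final prediction of the image
    return int(np.argmax(mean_post)), mean_post
```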
Parameters of neural networks
We have run a series of experiments to identify a suitable set of training parameters. We have experimented with the number of hidden layers, the number of hidden units per layer, dropout rates, activation functions (leaky ReLU), learning rate update policies, etc. As a result, we have found that the following set of parameters leads to good performance (an illustrative training sketch is given below):
• learning algorithm: SGD + Momentum
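For illustration only, here is a PyTorch-style training step consistent with SGD + momentum, the one setting confirmed above; the architecture, layer sizes, dropout rate and learning rate are placeholders, not the authors' published configuration:

```python
import torch
import torch.nn as nn

# Placeholder architecture: only 'SGD + Momentum' is confirmed by the text;
# layer sizes, dropout rate and learning rate are guesses.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(30 * 30, 512), nn.LeakyReLU(),
    nn.Dropout(0.5),
    nn.Linear(512, 512), nn.LeakyReLU(),
    nn.Linear(512, 2),                      # e.g. two classes for gender
)
optimiser = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(x, y):
    """One SGD + momentum update on a batch of 30x30 patches x with labels y."""
    optimiser.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimiser.step()
    return loss.item()
```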
A replication of LDNN
Firstly, the original LDNN method is replicated. After images have been cropped and resized to 60×60, a Sobel/Canny edge detector is used to obtain edges from the images. Subsequently, a low-pass filter, e.g. a Gaussian filter, is applied. Then a threshold is set so that strong edges, e.g. contours, are preserved while noisy points and weak edges are removed, resulting in the binary mask shown in Figure 6. Around every single white pixel in the binary mask, a patch of size 13×13 is generated. There is a trade-off between the number of patches and the information left after pre-processing. When most useful edges are preserved, the number of patches is extremely large. For example, in the case shown in Figure 6, a 9×9 Gaussian kernel N(μ = 0, σ = 2) is used as the low-pass filter and the threshold is set to 0.2. There are 425/485 patches on average per image when the Sobel/Canny edge detector is used, respectively, as listed in Table 3, which leads to 425 × 2647 = 1,124,975 or 485 × 2647 = 1,283,795 patches respectively for a single fold of the LFW database. Due to the huge number of patches and the limited amount of memory, it is not feasible to train a neural network using all four training folds. Instead of using three or four folds, only one fold was used for training and one fold for testing, leaving three folds unused. This leads to classification rates of 91.55%/93.68% as shown in Table 3, which are lower than those reported in [9] (∼96%). However, this could well be the result of using a smaller training set that contains only one fold.
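A sketch of this replication with SciPy, using the Sobel variant; note that `gaussian_filter` uses a truncated kernel rather than the exact 9×9 kernel described, so mask sizes may differ slightly:

```python
import numpy as np
from scipy import ndimage

def edge_patches(grey, sigma=2.0, threshold=0.2, patch=13):
    """Replication sketch of LDNN patch extraction: Sobel gradient magnitude,
    Gaussian low-pass filter, threshold to a binary mask, then a 13x13 patch
    around every white pixel (pixels too close to the border are skipped)."""
    gx = ndimage.sobel(grey, axis=0)
    gy = ndimage.sobel(grey, axis=1)
    edges = np.hypot(gx, gy)
    edges = edges / (edges.max() + 1e-8)              # rescale to [0, 1]
    mask = ndimage.gaussian_filter(edges, sigma) > threshold
    half = patch // 2
    patches = []
    for r, c in zip(*np.nonzero(mask)):
        if half <= r < grey.shape[0] - half and half <= c < grey.shape[1] - half:
            patches.append(grey[r - half:r + half + 1, c - half:c + half + 1])
    return np.stack(patches) if patches else np.empty((0, patch, patch))
```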
Patch size and overlapping
In order to identify the most suitable patch size, we have run a series of experiments with different patch sizes. The highest classification rate at image level without overlapping is obtained when the patch size is 30, as shown in Table 4. As shown in Figures 7 and 9, the reason that patch sizes 15 and 30 lead to over 94% classification rates at image level could be that important facial regions, e.g. the eyes, are not split into more than two patches. In particular, when the patch size is 30, the 1st patch of the first row contains the entire left eye and the second contains the right, which leads to the highest classification rate at image level without overlapping. Additionally, using patch size 30 requires the smallest amount of computation. Thus, the patch size is set to 30. Using a 50% overlapping ratio increases the classification rates by approximately 1%, while using a 75% overlapping ratio does not further improve the performance, as shown in Table 5. Therefore, the overlapping ratio is set to 50%, which is also less computationally expensive.
Classification rate of individual patches
For every single experiment using the nine patches, the classification rate of each patch is recorded. Results are shown in Table 6. Consistently across all experiments, the highest classification rates come from the patches that contain an eye. Compared with the last three patches, which do not contain the eyes as shown in Figures 10 to 12, the classification rates of the first six patches are 2 to 4% higher.
Five rows
In this experiment, five rows of 60-by-20 pixels are cropped from every single image as shown in Figure 13. Five neural networks are trained, one per row. When testing, for every single image, each row is classified by the corresponding network and the posteriors of the five rows are averaged. Results are shown in Table 7. As expected, the highest classification rate of a single row comes from the second row, which contains the two eyes as shown in Figure 13. Combining the five rows achieves a 93.73% classification rate, which is higher than any single-row result. However, the performance is lower than that of the nine-patch method.
Using entire images
Images are converted into grey-scale, resized to 32×32 and normalised to zero mean and unit variance. The neural networks are then trained using entire images directly, which results in a 92.72% classification accuracy.
Combinations
In this section, we experiment with several combinations of the nine patches, the entire images and the five rows. The best combination is shown in Figure 14 (combining the nine-patch method with entire images and the second row) and results in a classification rate of 95.64%, as shown in Table 8. The best performance was achieved by combining the neural network (A) trained using the nine-patch method with the neural network (B) trained using entire images and the neural network (C) trained using the second row. When testing, for every single image, the nine patches are classified by the neural network (A), the image itself is fed to the neural network (B) and the second row of the image is fed to the neural network (C). When combining these three sets of neural networks, the posteriors from the neural network (A) are multiplied by 1/9, which means an entire image shares the same weight as the nine patches extracted from the image. To summarise, the highest classification rate that the nine-patch method can achieve on the LFW database is 95.072%, and combining the nine-patch method with entire images and the second row results in 95.64%, both of which outperform the best results that DNN, Deep Convolutional Neural Networks (DCNN), Gabor+PCA+SVM or BoostedLBP+SVM can achieve on the LFW database [9].
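The combination rule can be written compactly as below; the final normalisation constant is an assumption that does not affect the argmax:

```python
import numpy as np

def combined_posterior(patch_posts, image_post, row_post):
    """Combine networks (A), (B) and (C): the nine patch posteriors of (A)
    are weighted by 1/9, so they jointly count as much as one whole-image
    input to (B) or one second-row input to (C)."""
    a = np.sum(patch_posts, axis=0) / 9.0   # equals the mean over nine patches
    # The 1/3 normalisation is an assumption for readability; any positive
    # constant leaves the argmax (the predicted class) unchanged.
    return (a + image_post + row_post) / 3.0

# patch_posts: shape (9, n_classes); image_post, row_post: shape (n_classes,)
```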
Additionally, compared with LDNN, the nine-patch method largely reduces the amount of computation while the classification rate is only around 0.5% lower.
Experimental results on the Adience database
Initially, experiments for gender and age classification are conducted separately on the 3D version. Subsequently, the neural networks responsible for gender classification are used to assist age-group classification. Similarly, five-fold cross-validation using the same folds as [8] is conducted to evaluate the performance.
Gender classification
We have trained neural networks using the nine-patch method and entire images with the same parameters as in Section 4.1. Results are shown in Table 9. The nine-patch method leads to a slight improvement (approximately 1%) compared with training neural networks using entire images. Compared with the results on the LFW database, the gender classification rates achieved on the Adience database are approximately 17% lower. The main reason is that images in the Adience database are not frontalised perfectly. As a consequence, patches do not always contain the same region of faces. The classification rates for each patch are shown in Table 10. Unlike on the LFW database, for the Adience database 3D version higher classification rates only come from the second row, which consists of the 4th, the 5th and the 6th patches, while the classification rates of the other six patches are 2-4% lower. The reason may be that the first three patches may not contain the eye or may be seriously distorted. For the second row (the 4th to the 6th patch), the patches contain the entire eye and a part of the nose as shown in Figures 15 to 17. Therefore, the classification rates of patches in the second row are the highest.
Age-group classification
We also carry out age-group classification using entire images and the nine-patch method. Results are shown in Table 11. The age-group classification rate using the nine-patch method is 40.25%. Compared with using entire images, the nine-patch method increases the classification rate by approximately 0.75% and the one-off classification rate by approximately 1%. The age-group classification rate of each patch is shown in Table 12. Similarly, the highest classification rates come from the patches in the second row, which contains the eyes. In addition to the second row, the classification rates of the 2nd patch and the 8th patch are 2.5-3.5% higher than those of the other patches. The reason may be that the 2nd patch contains the inner corners of the two eyes and the 8th patch contains the mouth.
Age classification for men/women
We conduct age-group classification for images of men and women separately using the nine-patch method and entire images. Results are shown in Table 13. In order to combine gender and age classification, the neural network (B) in Figure 18 is trained using 5,740 images of men only while the neural network (C) is trained using 6,410 images of women only.
The classification rate of each patch is shown in Tables 14 and 15. For images of men, the classification rates of the 1st and the 3rd patches are almost the same as those of the 7th and the 9th. However, for images of women, the classification rates of the 1st and the 3rd patches are about 2% higher than those of the 7th and the 9th. This indicates that the 1st and the 3rd patches, which contain the inner corners of the eyes, are more important for estimating women's age groups.
Combination of age/gender classification
We run a series of experiments to combine gender and age-group classification. Results are shown in Table 17 and the process is shown in Figure 18. Three sets of neural networks already trained in the previous experiments are used. The neural network (A) is responsible for gender classification, and the neural networks (B) and (C) are responsible for age-group classification for men and women, respectively. Initially, every single patch is classified by gender. If the patch is recognised as a patch of a man, it is fed to and classified by the neural network (B); if it is recognised as a patch of a woman, it is fed to and classified by the neural network (C). Combining age and gender classification increases the classification rate by 1.5% and the one-off classification rate by approximately 1%.
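A sketch of this routing, where the three networks are hypothetical callables returning posteriors and index 0 of the gender posterior is assumed to denote men:

```python
import numpy as np

def gender_routed_age(patch, net_gender, net_age_m, net_age_f):
    """Classify one patch by gender with network (A), then by age group with
    network (B) for men or network (C) for women, as in Figure 18."""
    p_gender = net_gender(patch)              # posterior over (male, female)
    if int(np.argmax(p_gender)) == 0:         # assumed: index 0 = male
        return net_age_m(patch)
    return net_age_f(patch)

def image_age_group(patches, net_gender, net_age_m, net_age_f):
    """Average the routed age posteriors of all patches of one image."""
    posts = [gender_routed_age(p, net_gender, net_age_m, net_age_f)
             for p in patches]
    return int(np.argmax(np.mean(posts, axis=0)))
```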
If images/patches are classified incorrectly by the neural network (A) responsible for gender, they are fed into the wrong neural network for age-group classification. To investigate the effect of this issue on performance, we fed images of men into the neural network (C) responsible for female age-group classification and vice-versa. Results are shown in Table 16. In the former case, the classification rate decreases to 36.15% / 73.07% (one-off) and in the latter case the age-group classification rate decreases to 36.30% / 66.82% (one-off).
To summarise, using the nine-patch method, the gender and age-group classification rates on the Adience database 3D version are 78.63% and 40.25% respectively, which are approximately 1% higher than using entire images. The combination of gender/age classification increases the age-group classification rate to 41.82% and the one-off classification rate to 77.98%. The results are similar to those established without using CNNs [8].
"year": 2017,
"sha1": "f77c9bf5beec7c975584e8087aae8d679664a1eb",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "f77c9bf5beec7c975584e8087aae8d679664a1eb",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Assessing concrete nest boxes for cavity-nesting bees
Artificial nest boxes for solitary bees and other cavity-nesting Hymenoptera are increasingly used for a variety of purposes, including ecological research, crop pollination support and public outreach. Their attractivity and colonization success by cavity-nesting solitary bees depend on their design and placement, including hole dimensions, orientation and the neighboring habitats and available resources. While most bee nest boxes are made of wooden materials, we assessed here the suitability of perennial, concrete nest boxes for cavity-nesting bees. We carried out a three-year nesting survey of 52 custom-made nest boxes located in 11 different sites throughout France and totaling 2912 available holes of 6, 8, 10 or 12 mm in diameter. Concrete nest boxes successfully attracted reproductive females of solitary bee species and supported successful larval development until the emergence of new individuals. Preferred cavities were the smallest ones (6-8 mm), located at the lowest tested positions above ground (31-47 cm) and oriented southward. Local bee populations established in nest boxes steadily increased throughout the three successive seasons in nearly all study sites. The cavity-nesting bee communities were mostly composed of rather common and generalist species, but also comprised a foraging specialist. Additionally, two cleptoparasitic bee species were detected. All species belonged to the Megachilidae. We further discuss the effects of neighboring urban and natural habitats as potential source or sink of nesting bees, as well as opportunities of concrete nest boxes as tools for urban agriculture and more generally for the new biomimetic urban designs to restore local ecosystem services in cities.
Introduction
In the face of declining wild bee populations (Biesmeijer et al. 2006; Potts et al. 2010; Burkle et al. 2013; Zattara and Aizen 2021), it is important to understand their biology and behaviour and how populations are changing in response to the evolution of their environment. Bee nest boxes, also termed 'bee hotels', 'nesting aids' or 'nesting traps' (von Königslöw et al. 2019), may have several roles to play in this context (MacIvor and Packer 2015; MacIvor 2017; von Königslöw et al. 2019). Nest boxes are man-made refuges for cavity-nesting bee species, typically displaying a range of artificial holes or natural cavities where nesting females can build several brood cells in line. Cells are made of mud, resin, chewed leaves or pieces of cut leaves, depending on the bee species. The most common materials used to build nest boxes are drilled wood, hollow plant stems or pithy stems, including bamboo or reed, or tubes made up of a variety of materials such as paper, cardboard, glass or plastic (MacIvor 2017). They are often used as experimental tools in science to study host-parasite, host-predator and host-disease relations, to understand the biology, behaviour, life history traits and food preferences of bee species, or to study biological invasions (MacIvor and Packer 2015; Geslin et al. 2020). They can thus serve as bioindicators of ecological changes and habitat quality (Gaston et al. 2005) and make it possible to monitor the evolution of local populations (Fortel et al. 2016; Geslin et al. 2020).
Nest boxes can also improve pollination services for plants, especially crops, when the intrinsic characteristics of nest boxes (e.g. cavity diameter and length) are designed to favour one or several species of interest (MacIvor 2017). Indeed, a growing number of solitary bees are now reared in nest boxes for commercial pollination purposes, including for instance some Osmia species for orchard pollination (Bosch and Kemp 2002; Koh et al. 2018; Boyle and Pitts-Singer 2019) and the alfalfa leafcutting bee Megachile rotundata for alfalfa seed crop pollination (Bosch and Kemp 2005).
Promoting wild (solitary) bee diversity and conservation is frequently invoked when setting up nest boxes, particularly in urban and peri-urban areas. However, nest boxes may often harbour more individuals of invasive alien species than endemic ones, e.g. the invasive giant resin bee Megachile sculpturalis or the alien wasp Isodontia mexicana (MacIvor and Packer 2015; Fortel et al. 2016; von Königslöw et al. 2019; Geslin et al. 2020), potentially competing with native bees for nesting cavities (Straffon-Díaz et al. 2021). Artificial nest boxes may also promote the proliferation of parasites, predators and diseases because nests are concentrated in the same area, which rarely exists naturally for non-gregarious species (MacIvor and Packer 2015). This therefore stresses the need for more research to identify best practices for optimising nest box benefits for local bee populations.
Last but not least, bee nest boxes are used to raise public awareness about the often overlooked existence of solitary bees and to observe their nesting behaviour (Hane and Korfmacher 2022). Nest boxes may therefore be viewed as useful tools for participatory science projects, assisting researchers in the study of the ecology, behaviour and diversity of solitary bee assemblages, while pursuing public outreach objectives at the same time. This can be particularly efficient in urbanized areas that can accommodate a substantial diversity of cavity-nesting bee species (Fortel et al. 2016; Fauviau et al. 2022).
The nest box occupancy or colonization success, often expressed as the percentage of holes eventually occupied by bee nests after a predetermined exposure period, depends on a range of intrinsic and extrinsic nest box characteristics, as reviewed by MacIvor (2017). Extrinsic characteristics such as the surrounding landscape composition and proximity to floral resources are obviously influential (Everaars et al. 2011; MacIvor and Packer 2015; MacIvor 2016), though their respective effects on occupancy appear difficult to disentangle owing to the multifaceted nature of the wild bee fauna in terms of foraging behavior and habitat preference. Conversely, the importance of some intrinsic nest box characteristics is well established, such as orientation of the openings with respect to the sun (i.e. southward in the northern hemisphere) or hole diameter, with smaller holes (e.g. 4 to 8 mm) usually being attractive to more bees than larger ones (von Königslöw et al. 2019). Other intrinsic nest box characteristics remain poorly documented to date, such as shading, orientation to prevailing winds, or height above ground or vegetation (e.g. Budrienė et al. 2004; Everaars et al. 2011; Martins et al. 2012).
Nest box material is arguably another critical point for bee occupancy. Most studies that have compared nest boxes made up of diverse materials found significant differences in terms of bee occupancy. Bee nests may be more abundant in drilled logs compared to hollow stems or commercial grooved boards (Fortel et al. 2016; González-Zamora et al. 2021). Likewise, the abundance of emerging bees may vary significantly among drilled logs or pithy stems from different plant species (Fortel et al. 2016), which illustrates the high variability of potential nesting outcomes from one box design to another.
While the majority of nest boxes built for commercial or research purposes are made up of wooden materials, to our knowledge concrete or other mineral materials have rarely been evaluated in the scientific literature, possibly because of the technical difficulty of manufacturing standard nest boxes with such substrates. Among the exceptions, Martins et al. (2012) found that bees successfully nested in cardboard tubes inserted in holes drilled in vermiculite, i.e. a composite mineral substrate. Hole occupancy by bees was, however, five times lower in the vermiculite substrate compared to wooden controls. More recently, Shaw et al. (2021) evaluated the use by solitary bees of holes in bricks, known as 'Bee Bricks'. The authors reported, over two consecutive years, the presence of nesting bees, holes being typically capped with mud, cut leaves and chewed leaves. Brick hole occupancy ranged from 1.3% (year 1) to 2.8% (year 2) out of several thousand available holes. Wooden control occupancies, 1.1% and 0.7% respectively, were lower, but not significantly different from brick hole occupancies. The identity of nesting bee species, as well as their actual emergence success, was not reported, however. Still, this latter study offers interesting new insights into the use of mineral (non-wooden) nest boxes by bees and their potential scientific and societal interest.
The overall objective of our study is to assess the suitability of concrete nest boxes for cavity-nesting bees, based on a three-year nesting survey involving a participatory research action. As a mineral substrate, one possible advantage of concrete for cavity-nesting insects is its resistance and durability, compared to wooden substrates that may need to be regularly replaced due to natural decomposition or degradation by weather and xylophagous insects. In line with these characteristics, concrete nest boxes may be further embedded into more sustainable urban designs and building restoration projects. They may, for instance, contribute to supporting the increasing demand for urban pollinators along with the development of urban agriculture in community allotments or on green roofs (e.g. Hofmann and Renner 2018). More broadly speaking, this is in line with the novel approach of 'biomimetic urban designs' that seeks to reconcile social and ecological issues by achieving positive net impacts on ecosystem services (Blanco et al. 2021). Biomimetic buildings most commonly focus on biophysical ecosystem services such as water collection or carbon sequestration through augmented vegetation covers around and on buildings, and more rarely consider fauna and habitat management schemes (Blanco et al. 2022). Still, one may eventually consider pollination services as part of those biomimetic designs, through the inclusion of wild bee concrete nesting aids.
Judging from the apparent nesting flexibility of some cavity-nesting species, e.g. Osmia bicornis and O. cornuta (Fortel et al. 2016), we predicted that some species may thrive in concrete cavities. Specific objectives were (i) to ascertain the attractivity, establishment and development of cavity-nesting bee communities in concrete boxes, (ii) to determine the intrinsic characteristics that promote box occupancy, particularly hole diameter, height above the ground and cardinal orientation, (iii) to assess whether the presence of urban and natural habitats in their immediate vicinity may further act as a source of cavity-nesting bees and (iv) to provide a broad description of the bee community attracted by concrete nest boxes, including species occurrence frequencies, expected richness and conservation status. Strengths and possible weaknesses of concrete bee boxes are finally discussed, along with future research perspectives.
Concrete bee nest boxes
Nest boxes were designed and manufactured specially for the study using ultra-high-performance fiber-reinforced concrete (UHPFC), also named SMART-UP (Vicat company, L'Isle-d'Abeau, France). The strength, durability and mechanical properties of the SMART-UP concrete make it a common material in construction and building technology, including for the construction of complex outdoor shapes and smaller decorative elements.
Concrete boxes were conceived as 25 × 25-cm wide removable modules that could be integrated into various kinds of urban furniture. Each box displays 56 holes, 8 cm deep, designed to offer nesting opportunities to cavity-nesting bees (Fig. 1). To guarantee a smooth finish of the inside of the holes, nest boxes were entirely molded in one piece with their holes, rather than having their holes drilled at a later stage. For the sake of the study, nest boxes were integrated into planters offering ornamental nectariferous and polliniferous plants such as lavender Lavandula angustifolia, rosemary Rosmarinus officinalis and thyme Thymus vulgaris, and were provided with educational displays about solitary bee nesting biology (Fig. 1). All boxes were identical in terms of hole number, diameters and distribution.
In each box, the 56 holes had diameters adapted to bee nesting (von Königslöw et al. 2019): 23 holes of 6 mm in diameter, 11 holes of 8 mm, 10 holes of 10 mm and 12 holes of 12 mm. The holes were arranged symmetrically, with respect to diameters, along a horizontal axis when the boxes were placed in the planters. On each planter, two boxes were exposed in the 'lower' position (holes arranged between 31 and 47 cm from the ground) and two others in the 'higher' position (holes between 49 and 65 cm from the ground), i.e. a total of four boxes and 224 holes per planter.
Study sites
A total of 14 planters were surveyed during three consecutive years in 2018-20. Planters were spread over 11 sites in different regions of France, located 30 to 720 km apart from each other (with the exception of two sites located just 6 km apart, see Fig. S1 in the online Supplementary Information). Three sites held two planters, located at least 200 m apart from each other. Study sites were all located in temperate continental to oceanic biomes, except the southernmost ones (Colomiers and Portes-lès-Valence), which were on the edge of the Mediterranean biome characterized by a more diverse bee fauna.
Nest box placement
All sites were private plots operated by the box designer (Vicat company) and its subsidiaries. They were typically sites of concrete activity embedded in a landscape mosaic composed of agricultural plots. Some sites also included in the direct vicinity of nest boxes (within a 50-m radius) either (i) extents of natural habitats (grasslands, hedgerows and other semi-natural elements), (ii) built-up and (sub-)urbanized areas, or (iii) both natural habitats and urbanized areas. The potential contribution of these natural and urbanized areas as possible habitat sources of cavity-nesting bees was assessed as an environmental factor liable to influence colonization success (see Data analyses below). The prevailing orientation of planters, either southwards or northwards, was also recorded, inasmuch as orientation is expected to be an important driver of nest occupancy. Five out of 14 planters were moved 50 to 100 m away from one season to the next due to changes in access conditions. In such cases, the information on placement (orientation and presence of nearby urbanized areas and natural habitats) was updated, resulting in a total of 19 different placement combinations. Statistical analyses took these placement changes into account with respect to each individual planter.
Nesting surveys
The nesting surveys were intended to document the percentage of holes occupied by bee nests. They were carried out as a participatory research action by volunteer staff of the plot owner company, who systematically reported capped holes at the end of each season of 2018-20. Participants were taught to recognize typical bee nest caps, made up of mud, cut or chewed leaves, plant fibers or resin. Holes obstructed by plugs of thin herbaceous twigs or other materials, likely originating from wasps or other arthropods, were noted as not available to nesting bees.
Emergence surveys
Emergence surveys were intended to (i) validate and assess the accuracy of the participatory nesting surveys provided by engaged volunteers, (ii) ascertain the presence of bee nests, (iii) evaluate their emergence success and (iv) characterize the bee species community nesting in the tested concrete boxes. After each nesting season, a subset of boxes with evidence of nesting activity were removed from planters to enter a routine emergence survey throughout the season n + 1 following box exposure in season n, i.e. in 2019, 2020 and 2021. A single one of the four nest boxes per planter was removed, and no nest box was removed from planters with obviously very little nesting activity. Boxes removed from planters were replaced by new ones, while the others remained in place for the next season.
Removed boxes were all gathered at the same location (Bees & Environment unit, INRAE research center, Avignon, France), stored individually in 30 × 30 × 30 cm collapsible insect rearing cages and placed in an insect-proof tunnel to protect them from heavy rain or strong wind. Cages were carefully checked every other day for the presence of newly emerged individuals. All individuals were collected for later identification to species level by a network of taxonomist experts recognized by the French National Inventory of Natural Heritage (INPN - Inventaire National du Patrimoine Naturel, Muséum National d'Histoire Naturelle, Paris, France). The emergence surveys lasted from the first recorded emergence, typically around mid-February, until at least July and after no new emergence had been recorded from any caged box over a period of six consecutive weeks. Boxes were carefully inspected before and after the emergence period in order to double-check hole occupancy data returned by the participatory nesting surveys, and to keep track of holes with caps removed, excavated or bored (successful emergence) vs. those that remained intact (no obvious emergence).
The study was primarily designed to monitor nest boxes as a whole and assess broad patterns of colonization and emergence success of the local cavity-nesting bee fauna. It was therefore not possible at this stage to obtain a high-resolution monitoring of individual nests to document thorough reproductive success values for each species (offspring size per nest or per nesting female), nor the species-specific nesting preferences with regard to hole characteristics.
Data analyses
Validation of the participatory nesting survey data. As a preliminary precaution, we performed a Pearson correlation test to compare the percent occupancies of nest boxes observed prior to emergences with those actually returned by the participatory nesting surveys on the same boxes (n = 29 boxes, see results). Occupancies were defined as the proportion of capped holes in each nest box, setting apart unavailable holes likely to have been obstructed or clearly occupied by non-bee arthropods. Additionally, in order to detect potential biases arising from participatory surveys, a paired t-test was performed to compare occupancy values obtained from unexperienced volunteers with those obtained from our own observations.
Interannual establishment and development of nesting activity. In a second step, we assessed whether nesting activity would overall increase year after year, as one might expect under the hypothesis of population or community establishment and development.
To do so, we used the binary occupancy data at the individual hole level (occupied vs. unoccupied) and computed the overall evolution of occupancy probabilities throughout the study years by means of a binomial generalized linear mixed model (GLMM). The study year, ranging from 1 to 3, was implemented as a fixed effect in the model, while specifying a random grouping structure to account for the non-independence of nesting data originating from the same planter, nest box and hole diameter category.
Intrinsic nest box characteristics that promote occupancy. After confirming the establishment and development of nesting activity, we sought to assess which intrinsic nest box characteristics would promote occupancy. We focused on three candidate correlates of occupancy as part of a GLMM modelling framework: hole diameter (as a continuous quantitative variable, in mm), height above ground (low vs. high box positioning) and prevailing orientation (southward vs. northward placement). We also considered all the two-way interactions between these candidate correlates, as occupancy may respond differently to a given correlate conditionally on another one.
For the sake of parsimony, we favored a stepwise model simplification approach to identify the smallest subset of relevant correlates and statistical interactions. We first computed a full model comprising all three candidate correlates as fixed variables, as well as their two-way interactions. The planter identity was specified as a random grouping variable. The temporal dependency of repeated observations on the same planter was further accounted for by allowing random slopes across years. Second, we simplified the full model down to the minimum adequate model, i.e. the model that returned the most parsimonious tradeoff between complexity and fit to data, as given by the Akaike Information Criterion (AIC). We used backward stepwise model simplification, sequentially deleting the terms that did not contribute to reducing the AIC, starting with interaction terms, and retaining simple terms whenever they were involved in a relevant interaction. Third, we performed Wald tests to assess the significance of each term in the minimal adequate occupancy model.
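As an illustration of the model structure and the AIC-based backward step, here is a simplified Python sketch using statsmodels; it fits fixed effects only (the actual analysis used a binomial GLMM with random planter effects and random slopes, fitted with glmmTMB in R), and the input file and column names are assumptions:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical input: one row per hole, with columns 'occupied' (0/1),
# 'diameter_mm', 'height' (low/high) and 'orientation' (south/north).
df = pd.read_csv("hole_occupancy.csv")

# Full fixed-effects structure: the three correlates and their two-way interactions.
full = smf.glm(
    "occupied ~ diameter_mm * orientation + height * orientation"
    " + diameter_mm * height",
    data=df, family=sm.families.Binomial(),
).fit()

# One backward step: drop the diameter-by-height interaction and keep the
# simpler model if it does not increase the AIC.
reduced = smf.glm(
    "occupied ~ diameter_mm * orientation + height * orientation",
    data=df, family=sm.families.Binomial(),
).fit()
best = reduced if reduced.aic <= full.aic else full
print(best.summary())  # Wald z-tests for each retained term
```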
Environmental factors acting as a source of nesting bees. Once the intrinsic nest box factors accounting for occupancy variations had been satisfactorily identified, we tested whether the presence of urban and natural habitats in the immediate vicinity of a planter might further act as a source of cavity-nesting bees. To do so, we implemented into the minimum adequate occupancy model the information on presence or absence of urbanized areas and of natural habitats as additional fixed binary factors. A significant positive effect might be interpreted as a source of cavity-nesting bees, indicative of a relevant placement for nest boxes to assist population expansion or restore connectivity among habitats.
Emergence success and description of the cavity-nesting bee community. In a final step, we summed up the results of the emergence surveys to provide an overview of the cavity-nesting bee communities that successfully nested in the concrete boxes. We computed summary data on emergence success (proportion of capped holes with evidence of emergence), species occurrence frequencies, average observed species richness, as well as total expected cumulative species richness.
All analyses were carried out using R (R Core Team 2022). GLMMs were computed using the package glmmTMB version 1.1.4 (Brooks et al. 2017). Expected richness estimates were obtained using the package vegan version 2.6-2 (Oksanen et al. 2022). The expected cumulative richness curve was plotted using the package iNEXT version 3.0.0 (Hsieh et al. 2022).
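For readers without R, the richness estimators reported in the Results are easy to reproduce; this Python sketch implements the bias-corrected Chao1 and the first-order jackknife from their textbook formulas (vegan's exact variants may differ slightly), using partly hypothetical abundance values:

```python
def chao1(abundances):
    """Bias-corrected Chao1 estimator: S_obs + f1*(f1-1) / (2*(f2+1))."""
    counts = [a for a in abundances if a > 0]
    s_obs = len(counts)
    f1 = sum(1 for a in counts if a == 1)   # singleton species
    f2 = sum(1 for a in counts if a == 2)   # doubleton species
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

def jackknife1(sites_per_species, n_sites):
    """First-order jackknife from incidence data: S_obs + Q1 * (m - 1) / m."""
    s_obs = len(sites_per_species)
    q1 = sum(1 for v in sites_per_species.values() if v == 1)  # uniques
    return s_obs + q1 * (n_sites - 1) / n_sites

# Toy usage: the first three abundances come from the text (H. adunca,
# O. bicornis, O. caerulescens); the remaining values are hypothetical.
print(chao1([256, 213, 76, 40, 30, 20, 15, 10, 5, 1, 1]))  # -> 12.0
```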
Nesting surveys and emergence surveys
The 14 planters totaled 52 nest boxes on display in the first year, 49 in the second year and 45 in the third year, i.e. a total of 2912, 2744 and 2520 holes, respectively. This led to 7866 binary hole occupancy records (occupied vs. unoccupied) out of 8176 (96.2%), considering that about 3.8% of the holes were judged unavailable to bees due to occupancy by other arthropods or other types of obstruction.
A subset of 8, 6 and 15 nest boxes with at least one occupied hole were removed at the end of the first, second and third study years, respectively, for the emergence surveys (in total, 29 boxes from 11 sites). A total of 686 newly emerged individuals were collected from the emergence cages and identified to 11 different species, all from the Megachilidae (see below). Wasps, flies or other arthropods were seldom collected, suggesting that, overall, they marginally influenced the surveys.
Validation of the participatory nesting survey data
Volunteer participants returned consistent and reliable occupancy data, closely correlated with those recorded by our observations on the same nest boxes (Spearman rank correlation test, n = 29 boxes, R = 0.98, P < 0.001). Errors appeared trivial and were generally biased towards a slight 3.0% underestimation by volunteer participants (Fig. 2). A paired t-test indicated that this difference was not significant, though close to the statistical significance threshold (t = -1.98, n = 29 boxes, P = 0.056). This bias mostly occurred due to undetected, inconspicuous caps that were positioned deeper inside holes, but was considered too subtle to affect the overall nesting statistics.
Interannual establishment and development of nesting activity
The average hole occupancy steadily increased over time, from 2.9 to 11.6%, and then to 25.3% for the first, second and third years, respectively. The binomial GLMM confirmed a highly significant temporal increase of nest occupancy throughout the nesting surveys (n = 7866 holes, z = 23.43, P < 0.001). A tremendous variability among sites and years was observed, however, with occupancy eventually reaching 70.5 to 98.8% in the third year in three sites, while remaining below 20% in most other sites.
Intrinsic nest box characteristics that promoted occupancy
All three candidate correlates of hole occupancy contributed to explaining a significant part of the total deviance (GLMM, accounting for interannual and site variabilities as random variables). After model simplification, the minimum adequate occupancy model retained hole diameter, height above ground and orientation, as well as highly significant two-way interactions between height and orientation on the one hand and hole diameter and orientation on the other (Table 1). Holes with smaller diameters, and with a southward prevailing orientation, had significantly higher probabilities of occupancy overall. Although the height factor (boxes set at higher vs. lower positions) was not significant per se, it significantly interacted with orientation in a way that reveals a strong preference of nesting bees for lower boxes when exposed southward, while no clear height preference appeared when boxes were exposed northward (Fig. 3). Likewise, orientation and hole diameter revealed a strong and significant two-way interaction, with steadily decreasing occupancy probabilities as diameters increased in southward nest boxes, while no clear diameter pattern appeared in northward nest boxes (Fig. 3).
Environmental factors acting as a source of nesting bees
The occupancy pattern returned by the minimum adequate model could be further refined by adding the environmental variables. The presence of urban and natural habitats in the immediate vicinity contributed to produce a better-fitting model (AIC reduced from 3398.3 to 3377.3), though not completely in accordance with initial expectations. Nearby natural habitats tended to increase occupancy probability, but not in a significant way. Furthermore, nearby urban habitats exerted a highly significant decreasing effect on occupancy, which goes against the hypothesis of urban areas as a source of cavity-nesting bees (Fig. 4).
Fig. 2 Correlation between observed occupancy recorded by volunteer participants and expected occupancy recorded during the bee emergence survey of the same 29 boxes. The straight line and shaded area stand for the expected-vs-observed correlation and its standard error, respectively. The dashed line indicates the 1-to-1 reference slope for a perfect match between expected and observed occupancy records. Most deviations from expectations occur slightly below the reference slope, meaning that volunteers tended to slightly underestimate actual occupancy.
Emergence success and description of the cavity-nesting bee community
Average emergence statistics were computed for the 17 boxes out of 29 which had at least ten occupied holes (Table S1 in the online Supplementary Information). The emergence success, i.e. the percentage of occupied holes eventually displaying evidence of successful emergence, was higher for boxes with a single or two seasons of exposure (88.0 ± 16.0%) than for boxes exposed during three consecutive seasons (62.6%). This means that a small, but cumulative, proportion of occupied holes may actually not represent viable nests leading to new emerging individuals. Based on the latter proportion of 37.4% (100% - 62.6%) of non-emerging nests after three consecutive seasons, this cumulative emergence failure may be tentatively estimated at about 12.5% per year (37.4%/3). Further investigations made after the survey, however, revealed that the vast majority (93%) of those non-emerged nests were empty holes (false nests or fake nests, see Discussion). A total of 11 species were recorded over the three years of survey (5 species in the first year, 8 in the second year and 8 in the third year). Setting apart cleptoparasitic species, a maximum of three species were recorded in a given nest box after a single exposure season, and up to four species after two or three consecutive seasons. The total extrapolated species richness one may expect to cover throughout the survey does not vary much among richness estimates, typically ranging from 12 to 13 species (Chao: 11.6 ± 1.3; first-order Jackknife: 12.9 ± 1.3; Bootstrap: 12.1 ± 1.0), while the extrapolated species accumulation curve predicts that a ceiling has probably already been reached over the surveyed sites (Fig. 5).
Fig. 3 Average nest box hole occupancy (%) by study year (first to third nest box exposure year), hole diameter (6, 8, 10 and 12 mm), height above ground (low vs. high positions) and prevailing orientation (southward vs. northward).
All sampled species belong to the Megachilidae and included medium to large species (body length 6 to 17 mm), typically nesting in pre-existing plant or mineral cavities (Table 2). Most of them (8 out of 11 species) are known to be polylectic, collecting pollen on a variety of unrelated plants. One exception is Hoplitis adunca, an oligolectic species specializing on pollen from Echium sp. (Boraginaceae). Two other species, Coelioxys echinata and C. inermis, are cleptoparasites known to occur in nests of Megachile centuncularis and M. rotundata, respectively.
Nine of the 11 species were found in at least two sites (range [2; 6]), which underlines a certain consistency in the identity of species that nested in concrete nest boxes (Table 2, Table S2). The most abundant and frequently collected species was H. adunca, found in six of the 11 sites (256 individuals, 37.3%), followed by Osmia bicornis (5 sites, 213 individuals, 31%) and O. caerulescens (5 sites, 76 individuals, 11.1%).
Discussion
The design of artificial nest boxes made of concrete is an original concept to our knowledge. We found in this study that concrete nest boxes succeed in attracting reproductive females of several solitary bee species and support successful larval development until the emergence of new, viable individuals. Preferred cavities were the smallest ones (6-8 mm in diameter), located at the lowest tested positions above ground (31-47 cm) and oriented southward. Local colonization rates steadily increased throughout the three consecutive seasons in nest boxes for nearly all study sites. The sampled nesting bee community appears not very diversified, with rather common and generalist species that typically nest in wood, hollow stems (rubicolous) or pre-existing cavities, but at least one of them is known to be a foraging specialist. Much research remains to be done to understand the potential effect of neighboring habitats as potential sources or sinks of the nesting bees. In that respect, opportunities of concrete nest boxes as tools for urban agriculture are further discussed.
Fig. 4 Average nest box hole occupancy (%, log-scale) as a function of the presence or absence of urban and natural habitats in the direct vicinity. The presence of urban areas exerted a significant negative effect on nest box occupancy (***), while the positive trend for natural habitats was not statistically significant (ns); see text and Table 1 for details. Bars delineate the median and quartiles, and vertical lines the 95% confidence intervals.
Interannual establishment and development of nesting activity
Owing to an effective participatory monitoring program, we found that concrete nest boxes obviously succeeded in attracting and hosting conspecific nesting bees, which eventually developed into local bee populations. Most of the experimental planters were colonized in the first year of exposure, and all were colonized after three years. The average hole occupancy followed an increasing trajectory from one year to the next, with many conspecific individuals emerging from the same nest box in a given year; for instance, up to 75 Osmia bicornis (sex ratio 1.7 males per female) were collected from a single nest box in the third year (Table S2). Several species, including O. bicornis, are gregarious or even philopatric, i.e. young bees build their nests close to the parental nest (Fortel et al. 2016), thus rapidly forming growing aggregations in the same area. Olfactory cues may also play a role in attracting nesting bees close to already existing conspecific nests, made either during the current season or a previous one (Pitts-Singer 2007).
Intrinsic nest box characteristics that promote occupancy
The most attractive and rapidly colonized holes for cavity-nesting bees were the smallest ones (6 mm diameter), oriented southward, and located at the lowest position above ground (31 to 47 cm). While the hole size and orientation preferences were already documented in the literature (von Königslöw et al. 2019), the preference for rather low positions appears to be a new observation. In previous bee nesting studies, the most commonly used cavities were often also the ones with the smallest diameters (< 8 mm) because there are more small bees than large ones (reviewed by von Königslöw et al. 2019). In their study, carried out with reeds and bamboos, von Königslöw et al. (2019) modelled the probability of cavity occupancy as a function of diameter and found that holes of 6 mm in diameter had about a 30% chance of being colonized, against only 15% and 7.4% for those 9 and 12 mm in diameter, respectively. In our study, occupancies did not decrease as steeply with increasing diameter, but still eventually reached a nearly two-fold difference between the smallest and largest ones (14.6%, 12.9%, 10.6% and 8.2%, respectively, for 6, 8, 10 and 12 mm diameter holes). Bees may even use smaller cavities in wood or hollow stems, with e.g. Hylaeus spp. preferring holes with a diameter between 3 and 4.3 mm (Budrienė et al. 2004) or Ceratina sp. nesting in holes between 2.6 and 5 mm (González-Zamora et al. 2021). It is unclear, however, whether concrete nest boxes would have attracted more diverse bee species with smaller holes (< 6 mm), and these may also be technically difficult to produce. Not surprisingly, our study carried out in the northern hemisphere revealed that mean occupancies were nearly twice as high in southward nest boxes (15.0%) compared to northward ones (8.6%), most probably owing to better thermal inertia. Temperature inside an artificial nest box depends partly on the type of shelter and the material used, but also on the amount and timing of sunlight (Youngsteadt and Favre 2022). The orientation strongly impacts the internal temperature. It is generally recommended to orient nest boxes to the southeast and to prefer a location shaded in the afternoon, so that nests heat up more quickly with the morning sun. This increases the number of foraging hours for adults, while avoiding the risk of extreme afternoon temperatures during heatwaves, which can be deadly for brood (MacIvor 2017; von Königslöw et al. 2019; Youngsteadt and Favre 2022). Yet, Wilson et al. (2020) found that Megachile rotundata preferred cooler, northward cavities when nesting in plastic nest boxes, which might be related to the poorer thermal buffering of plastic cavities compared to the concrete ones in our study. Nesting obviously depends on a complex interplay between material thermal properties, orientation and shading, which remains to be elucidated. Importantly, our study is mostly indicative of temperate climates (see Material and Methods). The thermal properties of concrete nest boxes might lead to different nesting outcomes in hotter climates like the Mediterranean one, and should therefore be the subject of a more targeted study in relation to nesting bee thermotolerance, especially in the current context of global warming and increasing heatwave frequencies and intensities in the southern regions.
Finally, hole depth is another nest box characteristic that appears critical for promoting occupancy. The depth effect was not investigated herein because all nest boxes used in this study had standard holes 8 cm deep. Still, this depth appears somewhat limiting given the 15-cm depth recommended in other studies (MacIvor and Packer 2015; von Königslöw et al. 2019). Shallow holes hold fewer cells and may lead to male-dominated sex ratios (MacIvor 2017), among other reasons because male cells are preferentially placed in the outermost positions, while female cells are located deeper in the nest for better protection against parasites and predators. Indeed, our survey returned a skewed sex ratio of 1.64 males per female (426 M : 260 F, Table S1). Furthermore, we collected on average 2.0 individuals per successful nest, i.e. nests with evidence of viable emergence (686 individuals out of 346 successful nests, Table S1), which is arguably low. For comparison, Osmia bicornis may build about three brood cells, and Heriades truncorum and Osmia lignaria about five brood cells, in a cavity 15 cm deep (Bosch and Kemp 2001; MacIvor 2017; von Königslöw et al. 2019). Likewise, Megachile gomphrenoides (Torretta et al. 2012) and Megachile cephalotes (Akram et al. 2022) may build up to 7 or 8 brood cells in cavities 10 and 15 cm deep, respectively. Nevertheless, we may not draw firm conclusions on optimal hole depth in our study, owing to a possibly inaccurate census of newly emerging bees. Nests made in concrete boxes cannot be opened, as one would do with reeds or cardboard nesting tubes, in order to properly count cells or newly emerged individuals. On several boxes, we even collected fewer individuals than the actual number of successful nests (Table S1). Some newly emerged individuals obviously remained undetected because they took refuge at the bottom of their original nest or of an adjacent hole.
It is also important to note that two species (setting aside cleptoparasitic ones) are known in the literature to be possibly bivoltine, i.e., with two generations per year (Table 2). Therefore, our monitoring design, based on a simple diachronic comparison of nest presence at the beginning and the end of the nesting season, may have failed to cover all their active nests.
Colonization and emergence values reported herein may therefore be somewhat underestimated for these species.
Environmental factors acting as a source or sink of nesting bees
Beyond the above-discussed intrinsic nest box characteristics, many environmental factors are liable to influence bee nesting activity. The present study did not aim to cover them all. We focused on two habitat types whose presence in the close vicinity is liable to influence nest box occupancy, namely natural habitats and urbanized areas. Urbanization in particular is known to act as an ecological filter, affecting soil-nesting bee species more drastically than cavity-nesting ones (Fauviau et al. 2022). Yet, we found that the presence of urbanized areas in the close vicinity had an overall negative effect on nest box occupancy. Instead of acting as a source of cavity-nesting bees, urbanized areas may produce a nesting dilution effect. Indeed, human-made infrastructures may provide cavity-nesting species with many nesting opportunities (MacIvor 2016) similar to those offered by the concrete nest boxes.
Conversely, the presence of semi-natural areas or urban green spaces in the close vicinity is another important criterion for promoting bee diversity in nest boxes (MacIvor 2016). As central-place foragers, bees forage back and forth between their nest and neighboring foraging areas. This makes the proximity of adequate floral resources an essential condition for successful nest establishment. In our study, nearby natural habitats tended to increase the probability of occupancy, but not significantly. However, with only 19 different nest box placement combinations in total, obtained from 14 planters located in 11 surveyed sites, our study was unlikely to reach sufficient statistical power to fully address the influence of surrounding habitats and their potential interactions. This issue remains to be resolved using a more extensive sampling network to determine the adequate placement for maximizing nest box occupancy.
Emergence success and description of the cavity-nesting bee community
The emergence surveys indicate that concrete nest boxes offer suitable conditions for the development of larvae and the emergence of viable offspring from most bee nests. The apparent emergence success gradually decreased over time, but this was an inherent bias of our experimental design. Each year, a small subset of nests (about 12.5%) apparently failed to produce viable individuals, as no emergence was observed from them. As these nests remain capped from one year to the next, their proportion increased over time, so that estimates of emergence success appeared overly low in boxes surveyed after three successive years of exposure (62.6%) compared to those surveyed after only one or two consecutive years of exposure (88.0%).
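A stylized calculation illustrates this accumulation bias. Assuming, for simplicity, a constant number of new nests per year and a constant 12.5% failure rate, with failed nests staying capped across years, the apparent success declines mechanically:

```python
# Toy model of the accumulation bias: constant nesting effort, constant
# 12.5% failure rate, failed nests stay capped and pile up over the years.
fail_rate = 0.125

for year in (1, 2, 3):
    new_successes = 1 - fail_rate              # per unit of new nests
    capped_total = 1 + (year - 1) * fail_rate  # new nests + accumulated failures
    print(f"year {year}: apparent emergence success = {new_successes / capped_total:.1%}")
# year 1: 87.5%, year 3: 70.0% -- the same downward trend as the observed
# 88.0% vs 62.6%, which additionally reflects between-box variation.
```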
Brood loss due to disease or parasitism is a possible explanation for failing nests. There are few data in the literature with comparable accuracy for emergence success, but some similar results have been reported, e.g., 83% survival in Osmia cornuta (Kehrberger and Holzschuh 2019) or 85.2% in O. bicornis (2,889 out of 3,394 offspring in Persson et al. 2018). Average brood loss rarely exceeds 20% in wild bees (Minckley and Danforth 2019), which is consistent with our findings. However, in our study, we called failing nests those nests for which the cap remained intact, i.e., with no evidence of emergence. On the contrary, brood loss is most often attributable to natural enemies that feed on provisioned pollen or prey on the eggs or larvae (Minckley and Danforth 2019), and that therefore bore or excavate the nest cap either before or after preying on the nest content.
Alternatively, we hypothesize here that most of our so-called failing nests were actually false or fake nests. In a subsequent emergence trial involving nine identical concrete nest boxes, we excavated 45 holes whose cap remained intact throughout the season (10 holes capped with mud and 35 with chewed leaves). Two of them harbored a potentially parasitic bombyliid (Diptera) larva, one harbored a couple of empty cells, while all the other 42 capped holes (93%) were actually completely empty, containing no brood cell, pollen, larvae or any other organism. This observation points to an obvious underestimation of emergence success in our study. It is already known that some females may leave nests incomplete or abandoned (MacIvor 2016). In our study, as many neighboring holes were on display, some females may sometimes have capped a wrong hole, or capped several additional holes around their actual nest in order to confuse parasites or predators through a prey dilution effect. In line with this fake nest hypothesis, some cavity-nesting bee species may leave an empty cell, also called a vestibular cell, at the outermost position of the nest in order to keep their brood out of reach of potential oviposition by parasitic wasps (Münster-Swendsen and Calabuig 2000; Velez et al. 2017).
Regardless of the underlying biological explanation for empty nests, this highlights the risk that accumulating false or fake nests could over time saturate nest boxes. This may be viewed as a limitation of the continuous use of concrete nest boxes in practice, which would require a regular maintenance scheme to support nesting dynamics, such as hole cleaning every two to three years, which could also help limit parasitism and the spread of diseases (Youngsteadt and Favre 2022).
The bee community that nested in concrete nest boxes appears overall not very diversified, with mostly common and generalist species, though one of them is also known to be a foraging specialist. In comparison to the 11 species we recorded, an average of 24 species were found in nest boxes composed of wooden supports or hollow stems in urban or peri-urban environments (Pereira-Peixoto et al. 2014; MacIvor and Packer 2015; Fortel et al. 2016; von Königslöw et al. 2019). Furthermore, the species accumulation curve (Fig. 5) indicates that a ceiling was virtually reached, and richness estimators predict hardly more than a couple of additional species to be expected in the sampled sites. However, there is a high chance that more cavity-nesting species would eventually be detected in concrete nest boxes if they were set up in a wider range of sites, including parks or more natural areas that may provide a source of more diverse cavity-nesting species.
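The paper does not name its richness estimators; as an illustration, a standard abundance-based option is Chao1, sketched below on hypothetical abundances chosen to echo the 11 observed species:

```python
# Illustrative Chao1 estimate on hypothetical abundances chosen to echo the
# 11 species observed here; the study does not name its estimators.
abundances = [120, 85, 60, 33, 20, 11, 6, 3, 2, 1, 1]  # individuals per species

s_obs = len(abundances)
f1 = sum(1 for a in abundances if a == 1)  # singletons
f2 = sum(1 for a in abundances if a == 2)  # doubletons

chao1 = s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))  # bias-corrected form
print(f"observed richness: {s_obs}, Chao1 estimate: {chao1:.1f}")
```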
The species assemblages that nested in concrete nest boxes did not appear singular from a taxonomic or functional point of view. All the species are considered relatively common and do not benefit from a particular conservation status at the European level ("Least Concern" category of the IUCN, the International Union for the Conservation of Nature). They have a wide and ubiquitous range across French and European territories, except for Anthidium florentinum, which is a rather Mediterranean species that was, indeed, collected in one of the southernmost sites of the study. Likewise, the majority of recorded species are polylectic, i.e., foraging for pollen on different genera of flowering plants in an unspecialized way, which is also a functional trait favored in highly anthropic areas (Fauviau et al. 2022). Still, one species, Hoplitis adunca, is known to be oligolectic on the pollen of Echium spp. (Boraginaceae). Food specialization is often associated with ecological fragility because specialist species cannot survive locally without the presence of their preferred host plant (Biesmeijer et al. 2006). The presence of nesting opportunities such as artificial nest boxes may then be viewed as an asset for the maintenance of such local populations.
Two cleptoparasitic bee species (Coelioxys inermis and C. echinata) were also observed in boxes that had been exposed for two or three years, which is indicative of complex species interactions. Cleptoparasitic bees depend on the prior establishment of a population of their host species, in this case Megachile centuncularis and M. rotundata, respectively. Indeed, the two cleptoparasitic species were recorded in the same nest boxes as their respective hosts. The number of cleptoparasitic species may increase over time in nest boxes, although individual numbers are definitely too low in this study to test this hypothesis. The establishment of cleptoparasitic bees also testifies to a certain stability of the parasitized populations over time, if they are maintained despite the cost generated by cleptoparasitism (Sheffield et al. 2013).
Interestingly, not a single individual of the invasive giant resin bee Megachile sculpturalis was recorded during our three years of survey, in spite of its rapid expansion from the south of France northward (Le Féon et al. 2018). Indeed, 16 specimens of this bee species were previously collected in wooden nest boxes by Fortel et al. (2016), close to our southernmost sampling sites. This species appears to be largely dependent on wood as a nesting substrate, and nest boxes made of concrete may thus escape its spread. This species is native to eastern Asia and is spreading rapidly around the world to the detriment of native species (in France since 2008, Vereecken and Barbier 2009). It has become very common in artificial nest boxes made of wood or reeds in anthropized areas (Geslin et al. 2020). Because of its large body size (19-22 mm for males and 21-25 mm for females), it uses cavities with a large diameter, usually between 10 and 12 mm or more if available. It may therefore compete for nesting sites with large bees such as those of the Xylocopa and Anthidium genera in wooden nest boxes (Geslin et al. 2020; Straffon-Díaz et al. 2021).
Perspectives: concrete nest boxes as tools for urban agriculture?
Concrete nest boxes may be useful tools to help maintain local bee populations for urban agriculture purposes, by providing them with perennial nesting opportunities. Owing to their resistance and durability, concrete nest boxes may be integrated into biomimetic buildings, i.e., novel construction or restoration approaches designed to promote local ecosystem services (Blanco et al. 2021). Further studies are, however, needed beforehand to fully apprehend the species-specific reproductive success of bees in this specific nesting substrate. In particular, we recommend documenting three main issues that could not be fully addressed in the present study:

1. Refining the focus scale at the nest level, from colonization success to reproductive success. The current study was carried out at the level of nest boxes as a whole, returning broad indicators of colonization or emergence success at the community level. This, however, precluded fine descriptions of the species-specific preferences of nesting females for particular hole characteristics. High-resolution monitoring of individual nests would certainly be possible in a more advanced study, e.g., using video recording, leading to thorough measurements of reproductive success sensu stricto (i.e., offspring size per nest or per nesting female).

2. Comparing the attractivity of nest boxes made of different materials. Thorough comparisons with other nesting substrates would be advisable, including wood and other types of mineral materials. Although concrete nest boxes successfully attracted cavity-nesting females, it is still unclear whether wooden alternatives would perform better or attract different bee species. This should be coupled with simultaneous assessments of the thermal properties of holes with regard to the physiological tolerance of adult bees and brood.

3. Evaluating predator and parasite loads. Wooden nest boxes may promote local concentrations of brood parasites and predators. It would be advisable to assess whether a similar risk arises in concrete nest boxes. Monitoring the prevalence of pathogens on larvae is admittedly not straightforward in concrete cavities. One may place rolled paper inside holes prior to nesting, in order to subsequently remove nest contents without damaging the brood. Meanwhile, regardless of the material, further studies should assess the possibility of diluting the risks of parasitism by varying nest box availability, accessibility, and distribution in the neighborhood.
Subject to clarification of these points, concrete nest boxes have the potential to promote local populations of some cavity-nesting solitary bees, with positive implications for urban agriculture as well as public outreach in urban areas. Interestingly, a large European meta-analysis of urban bee surveys (Fauviau et al. 2022) revealed that O. bicornis, O. cornuta, A. manicatum, and A. florentinum were amongst the most frequently reported species in cities. Those species were also recorded in our emergence survey, suggesting promising applications for concrete nest boxes in urban agriculture plots such as green roofs or community gardens, where wild bees are noticeably diversified (Hofmann and Renner 2018; Kratschmer et al. 2018; Baldock et al. 2019). Moreover, these polylectic bees may contribute to the pollination of a wide range of entomophilous cultivated plants.
Urban bee communities are not random samples of the wild bee communities found in surrounding natural areas. Cities act as an 'ecological filter', being less unfavorable to above-ground cavity-nesting bee species than to below-ground ones (Fauviau et al. 2022). Artificial nests in remote urban agricultural plots such as green roofs may contribute to promoting local above-ground wild bee populations, particularly when green spaces are more abundant in the surrounding areas (MacIvor 2016). This would be in line with a more global approach of biomimetic urban planning at the neighborhood scale (Blanco et al. 2021). A network of concrete nest boxes may be embedded in biomimetic building projects, which are to date mostly designed to promote vegetation-based ecosystem services, but may also consider fauna and habitat management schemes in the future (Blanco et al. 2022).
Fig. 1 Example of concrete nest boxes placed in a planter. (A) Nest boxes display 23 holes of 6 mm, 11 holes of 8 mm, 10 holes of 10 mm and 12 holes of 12 mm in diameter. They are placed either in the lower or the higher position (holes at 31-47 cm and 49-65 cm from the ground, respectively). (B) Holes capped with mud indicate the presence of potential nests of mason bees like Osmia spp. (C) Nesting activity of Anthidium sp. in concrete cavities (photo provided by E. Salles, Vicat).
Fig. 5 Cumulative bee species richness as a function of the number of individuals collected from the 29 nest boxes that entered the emergence survey. The extrapolated part of the curve (dotted line) does not predict any increase in species richness from the 11 surveyed sites.
Table 1 Results of the binomial GLMMs testing for the effects of concrete nest box characteristics on hole occupancy. Details of the minimum adequate occupancy model are shown first, with hole diameter as a continuous explanatory variable and box position and orientation as bimodal categorical variables (focus modalities are given in parentheses). The two-way interactions noted '×', as well as interannual trends, are further depicted in Fig. 3. In a second part, the minimum adequate model was implemented with two candidate environmental (extrinsic) variables, indicating the presence or absence of urbanized or natural areas in the close vicinity of the planters. Models are based on 7866 individual hole occupancy (occupied/unoccupied) observations reported from 14 planters in 11 surveyed sites (in total 19 different placement combinations regarding orientation and extrinsic variables). Each planter supported four concrete nest boxes and was surveyed for two to three consecutive years. Emergence success was higher in boxes surveyed after one or two consecutive years of exposure (88.0%, n = 5, range [60.0%; 100%]) than in boxes surveyed after three seasons of exposure (62.6 ± 13.7%, n = 12, range [38.5%; 83.3%]).
Table 2 Results of emergence surveys by species. Results include global species occurrence data in concrete nest boxes (individual numbers, relative abundance in % and number of occurrences out of the 11 study sites) and species life history traits related to body size, phenology, pollen specialization (lectism) and nesting habits. Except when otherwise stated, body size data come from Amiet et al. (2004) and other life history traits from Scheuchl and Willner (2016).
"year": 2023,
"sha1": "a2a62cc5ae1f60e792588d174bce17e377ce2d54",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10531-023-02719-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "67de55865902ab9839108224fc4759e35e67953d",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": []
} |
Design of an ELC resonator-based reusable RF microfluidic sensor for blood glucose estimation
Design of a reusable microfluidic sensor for blood glucose estimation at microwave frequencies is presented. The sensing unit primarily comprises a complementary electric LC (CELC) resonator, which is made reusable by filling the test sample into a glass capillary before mounting it inside a groove cut in the central arm of the resonator. The use of a glass capillary to contain the blood sample eliminates the possibility of any direct contact of the sensor with the test sample, and hence wards off any coincidental contamination of the sensor. Usage of the capillary provides additional benefits, as only microliters of the sample are required, besides offering a sterile measuring environment since these capillaries are disposable. The capillary, made of borosilicate glass, is highly biocompatible and exhibits exceptionally high chemical resistance in corrosive environments. Apart from reusability, the novelty of the proposed sensor also lies in its enhanced sensitivity, which is quite an essential factor when it comes to the measurement of glucose concentration in the human physiological range. The applicability of the proposed scheme for glucose sensing is demonstrated by performing RF measurements of aqueous glucose solutions and goat blood samples using the fabricated sensor.
At microwave frequencies, blood glucose estimation has been carried out by various non-invasive and invasive methods. Non-invasive methods involve blood glucose estimation from the fingertips or earlobes using patch resonators, spiral microstrip resonators, ultra-wideband antennas, bandpass filters or spatially separated split ring resonators [11][12][13][14][15]. Even though non-invasive methods appear promising and convenient, they are associated with high degrees of unpredictability due to variations in skin thickness, applied pressure and fingerprints 16,17. Invasive methods are, therefore, the more popular means of blood glucose monitoring at microwave frequencies.
The quantity of test sample adequate for observing acceptable sensor responses in the RF and microwave frequency range is very critical from the practical point of view. Earlier works have used Petri dishes, cuvettes and large containers for holding the sample, but the necessity of having large quantities of blood and an oversized sample holder limits their practical applicability [31][32][33]. Sample droplets may also be placed directly on the sensor's sensing region for clinical diagnostic applications 34,35. Of late, microfluidic channels are extensively used for conveying the test sample to the sensing unit 36,37. Such channels not only guide the samples efficiently to the sensing region, but the quantity of sample required is also very small, as little as a few microliters or nanoliters. The microfluidic channels may be made of biocompatible materials such as glass 20,38, silicon 19 or polydimethylsiloxane (PDMS) [28][29][30]39. Even though state-of-the-art self-monitoring devices can work with blood sample volumes as low as 1 µl, the majority of the meters, close to 71%, displayed incorrect readings when tested with this supposedly sufficient but still limited sample volume 40.
Sensor reusability is another factor that must be considered while designing sensor systems. When the test sample comes in direct contact with the sensing region, even after flushing out, the sample leaves an imprint on the channel as well as the sensor that is capable of altering further measurements and the data fidelity 34,41. If closed carriers such as glass or silicon capillaries are used for carrying the test sample, the error that would have arisen due to the remnants left on the sensing regions can be obviated. Even so, conventional microfluidic channels would still bear the history of the previous samples and interfere with the measurement process. The proposed sensing scheme thus puts forward the novel idea of using disposable glass capillaries in lieu of conventional microfluidic channels for ensuring error-free and faithful data acquisition, besides guaranteeing a sterilized measuring environment and ease of handling. Considering the commercial aspect, even though state-of-the-art glucose sensors have moderately low fixed costs, their variable costs in the form of lancets and test strips are exceedingly high, making them unaffordable for long-term use. In the case of microwave sensors, despite the fixed costs being comparable to those of commercial glucometers, the variable costs, incurred for the disposable glass capillaries, are negligible, making them a very economical alternative. Another factor that has encouraged seeking an alternative to modern-day glucose monitoring devices is their limited shelf life. The glucose sensing strips work on complex biochemical reactions and, consequently, the finite lifetime of the chemicals may lead to strip failures. The proposed scheme using microwave resonators is devoid of any biochemical treatments and hence there are no major factors that would limit the lifetime of these sensors.
There are quite a few other factors that affect the accuracy of state-of-the-art glucose sensors and the proposed microwave microfluidic sensors equally; the most important of them being patient factors such as blood sample composition and pharmacologic state. Almost every state-of-the-art blood glucose monitoring system measures the glucose concentration of a complex composition of blood, which is then calibrated against plasma glucose. Diversity in the cellular, molecular and salt content of the sample, therefore, has an effect on the measurement. An approach to implement the proposed microwave sensing modality free from the inaccuracies due to composition, to a certain extent, would be to use plasma samples in place of whole blood samples. However, this could be a viable alternative in a laboratory testing environment, but inexpedient for a self-monitoring device, which is the ultimate design intent of the proposed sensor. Hematocrit levels may interfere with the readings of a glucometer as the glucose content of the cells is different from that of the plasma 42. In commercially available glucometers the blood cells can alter the electron flow or the enzymatic reactions. This situation does not occur in glucose sensing with the proposed sensor. Certain substances in the blood that occur naturally or are present during diseased states, such as triglycerides, oxygen, uric acid, acetaminophen and ascorbic acid, are found to affect the accuracy of electrochemical blood glucose monitoring systems due to the way they react with the mediator enzymes and electrodes. These factors prima facie do not interfere with the proposed measurement procedure using microwave sensors, but neither the proposed sensor nor any state-of-the-art glucometer has the capability to infer whether a wrong meter reading was caused by these factors.
In this context, the design of a reusable microfluidic sensor for monitoring blood glucose at microwave frequencies is presented. The sensor is inspired by the metamaterial structure of the complementary electric LC resonator. The central arm of the resonator is modified to form a cavity by carving a groove deep into the substrate and then coating metal on the sidewalls for enhancing the capacitance. The test sample, collected in a disposable borosilicate glass capillary, is then placed into the cavity and the responses are observed. With the exception of a handful of works on human serum 34,43 and pig blood samples 19,31,32, a vast majority of the studies in the research area of microwave-assisted glucose sensing rely on using aqueous glucose solutions as the test sample, due to the intricacies involved in using real blood samples for experimental validation. In this work, aqueous solutions of glucose, as well as blood samples from goat, are used for the study. The dependency of the sensor's response on the glucose concentration can be translated into predictable relationships with the help of mathematical models based on the resonant frequency shifts 44. The proposed measuring strategy may be endorsed as a primary screening method for blood glucose monitoring.
Design procedures
Design of microfluidic channel. While dealing with a practical measuring scenario, blood glucose estimation requires specific precautionary measures to be followed for improving the accuracy 41. Careful handling of the samples and preventing their exposure to a contaminated environment is the foremost of all. Secondly, a major detrimental factor that limits the usage of microwave planar sensors for glucose sensing is the inability to ensure independent measurements. There are two predicaments that have to be dealt with when reusing the same equipment for the measurement. The first is associated with the reusability of the sample holder, i.e., the microfluidic channels conveying the glucose or blood samples have to be meticulously cleaned or sterilized after each measurement, as the sample leaves its signature in the form of remnants on the tract/passage walls which affect further measurements. The second is associated with the sensor; since the sensor comes in direct contact with the test sample, the sensor too has to undergo frequent sterilization procedures. In this scenario, the use of disposable sample holders is suggested, thus ensuring a sterile environment for testing and keeping channel and sensor contamination at bay. Capillaries made of laboratory-grade borosilicate glass, having length 30 mm, outer diameter 2.2 mm, wall thickness 0.2 mm, and an approximate maximum capacity of 95 µl, are used as sample holders in this work. They can be sealed from both ends after filling the sample, thus isolating it from further external contact. Hence the use of disposable capillaries eliminates the need for sterilizing the holder and the sensor, and expedites the measurement process. Furthermore, it is non-viable to use large volumes of blood even if it means improved sensor performance with more sample volume to interact with. This is another reason for endorsing the use of microfluidic capillaries. Due to their availability in a sterilized state, convenience and cost-effectiveness, disposable sample holders may be envisioned as a solution of the future, fast replacing the conventional microfluidic channels used in microwave sensing schemes.
Sensor design and sensitivity analysis. The sensor comprises a transmission line and a resonator
etched on the ground plane. The test sample is placed on top of the resonator structure, where it interacts with the electric field coupled from the host line. The intensity of the electric field in the sensing region of the resonator plays a paramount role in determining the sensitivity of such planar sensors. The proposed sensor design has a CELC resonator in conjunction with an incised groove as the basic sensing unit. The sensor is designed progressively from a conventional rectangular complementary split ring resonator (CSRR). It is a well-established fact that the strength of the electric field is highest in the arm of the CSRR opposite to the resonator gap 45. The test sample, therefore, has to be suitably placed in the arm opposite to the resonator gap for maximum field coupling. Thus, with the objective of maximizing the sensitivity, starting from the basic conventional rectangular CSRR based sensor, the evolution of the proposed sensor design is presented in this section. The sensors are designed on an 80 mm wide and 100 mm long Rogers RT5880 substrate of thickness 3.175 mm, relative permittivity εr = 2.2 and loss tangent tan δ = 0.0009. The copper metallizations are 35 microns thick, and the 50 Ω transmission line is 9.6 mm broad. The full-wave simulations are carried out in CST Microwave Studio. The sensitivity may be increased forthrightly by reducing the CSRR structure dimensions; however, this strategy cannot be implemented in this design due to size restrictions. Decreasing the width of the sample-holding arm of the CSRR structure increases the gap capacitance and hence the intensity of the electric field, but it cannot be reduced below 3 mm as the capillary has to be placed there. Likewise, the length of the CSRR structure cannot be reduced below 30 mm, a constraint imposed by the capillary length. Thus, in all the designs to follow, the optimizations of the dimensions are carried out within these specified limits.

Design 2: CELC resonator. In order to increase the electric field concentration in the broader arm, intuitively, the same CSRR design may be superimposed with its mirrored counterpart, giving rise to the CELC resonator of Fig. 1d. The sensor has a resonant frequency of 2.31 GHz. Although the sensor is more sensitive than the previous design due to the enhancement in the average field intensity, the design needs further improvement: since the capillary has a circular cross-section, the sensor would be loaded at just a single point, which does not allow for sufficient field interaction. Rectangular capillaries may be employed as an alternative; however, the difficulty involved in custom manufacturing such capillaries of diminutive sizes would entail additional cost.
Design 3: CELC resonator with embedded cavity and metalized sidewalls. The CELC resonator has improved sensing capability compared to the conventional CSRR structure. Nonetheless, the constraint on reducing the width of the sensing arm limits any further attempt to intensify the electric field strength, and hence the sensing capability, beyond the achieved standards. Alternatively, the sensing arm may be visualized as a parallel plate capacitor with plate area defined by the length of the central arm (31.73 mm) and width given by the thickness of the copper metallization (35 µm), and plate separation equal to the width of the arm (2.92 mm). The capacitance of the parallel plates, and consequently the field concentration, may be amplified by increasing the plate area alone, as there is a limitation on reducing the plate separation, as discussed earlier. The plate area is therefore increased, as shown in Fig. 2a, by extending the ground plane as the sidewalls of a 2 mm deep groove 46. The groove is made by carving out the substrate in the central arm of the CELC structure. Figure 2b shows the electric field intensity distribution at the sensor's resonant frequency of 1.9 GHz; the glass capillary containing the sample, placed in the sensing region, can also be seen. The glass capillary is modeled as a Pyrex glass container of relative permittivity 4.82 and loss tangent 0.0054. A comparative plot of the electric field intensity in the active sensing region of all the discussed designs is presented in Fig. 2c. The abscissa shows the distance along the length of the sensor with the midpoint of the sensing region as the origin. For all the designs, the field is calculated at the center point of the sensing region, i.e., at y = 0, along the plane that passes through the face center of the sidewalls, i.e., at a depth of z = 0.0175 mm for designs 1 and 2, and z = 1 mm for design 3, measured from the surface of the sensor. As evident from the electric field distributions of the figure, design 3 has the densest concentration of electric field, i.e., 6634 V/m, at the center of the sensing region (x = 0) as compared to the previous configurations of design 1 and design 2, having field intensities of 3720 V/m and 4436 V/m, respectively. Also, in design 3, the field intensity is reasonably uniform throughout the sensing region.
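A back-of-the-envelope parallel-plate estimate suggests how strongly the metalized sidewalls scale up the arm capacitance. The sketch below uses the dimensions quoted above, treats the flat case as air-filled for simplicity, and ignores fringing fields and the mixed dielectric, so the numbers are only indicative:

```python
# Back-of-the-envelope estimate (ignores fringing fields and the mixed
# air/substrate/sample dielectric): capacitance gain from the metalized
# sidewalls, with the arm dimensions given in the text.
EPS0 = 8.854e-12        # vacuum permittivity, F/m
ARM_LENGTH = 31.73e-3   # m
PLATE_SEP = 2.92e-3     # m, arm width = plate separation

def plate_cap(plate_height_m, eps_r=1.0):
    return EPS0 * eps_r * ARM_LENGTH * plate_height_m / PLATE_SEP

c_flat = plate_cap(35e-6)         # plates = 35 um copper edges (air-filled, simplified)
c_groove = plate_cap(2e-3, 2.2)   # 2 mm sidewalls inside the eps_r = 2.2 substrate
print(f"flat arm:    {c_flat * 1e15:.1f} fF")
print(f"with groove: {c_groove * 1e15:.0f} fF (~{c_groove / c_flat:.0f}x larger)")
```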
To analyze the sensitivity across all the sensor variants, the relative permittivity, εr, of a lossless test sample placed over the central arm of the resonator is varied over a wide range and the resultant loaded resonant frequency, fr, is observed. The normalized frequency shifts, (f0 - fr)/f0, where f0 is the sensor's unloaded resonant frequency, are then compared to ensure a fair assessment, as it is known that a sensor's higher resonant frequency in itself could be a partial contributing factor towards achieving a more significant frequency shift and thus higher sensitivity 47,48. Figure 3a shows the unloaded resonant frequencies of the sensors. The proposed sensor design, design 3, possesses an exceptionally high Q-factor of 329.23 as opposed to design 1 and design 2, having Q-factors equal to 34.94 and 26.06, respectively, which makes the proposed design 3 extremely suitable for characterizing low-loss samples. The normalized frequency shifts plotted as a function of the test sample's relative permittivity, corresponding to all the sensor designs, are shown in Fig. 3b. Figure 3b clearly demonstrates the superior sensing capabilities of the newly proposed design compared to the conventional CSRR and CELC designs, which could be attributed to the higher field concentration, as evident from Fig. 2c. Consequently, design 3 is finally chosen to carry out the experiments.
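The two figures of merit used here are easily reproduced. In the sketch below, the frequency values are taken from the measurement section (unloaded 1.71 GHz, empty capillary 1.6856 GHz) and the Q-factor definition assumed is the usual resonant frequency over -3 dB bandwidth:

```python
# Figures of merit used above; frequency values are those reported in the
# measurement section, and Q is assumed to be f0 over the -3 dB bandwidth.
def normalized_shift(f0_hz, fr_hz):
    """(f0 - fr) / f0 with f0 unloaded and fr loaded resonance."""
    return (f0_hz - fr_hz) / f0_hz

def q_factor(f0_hz, bw_3db_hz):
    return f0_hz / bw_3db_hz

print(f"{normalized_shift(1.71e9, 1.6856e9):.4f}")  # empty-capillary case, ~0.0143
print(f"{q_factor(1.9e9, 1.9e9 / 329.23):.1f}")     # recovers the quoted Q of design 3
```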
Measurement and results
The designed sensor is fabricated and its applicability in differentiating samples of different glucose concentrations is studied. The photolithographic fabrication techniques adopted in realizing the proposed sensor are illustrated in Fig. 4. In this process, initially, the masks are generated by transferring the pattern onto a photographic film (Fujifilm plotter film HG XPR-7S) in a photoplotter and then developing it manually in a tray using Fujifilm QR-D1 developer. At this stage, the film is inspected for defects such as track breakages or short circuits. Meanwhile, the entire substrate, shown in Fig. 4a, is uniformly laminated with a negative photoresist by passing the substrate between heated feed rollers. The laminated substrate is prebaked at a temperature of 100-120 °C for 10 min in a convection oven to densify the photoresist by vaporizing the coating solvent. After careful alignment of the masks on both sides, the laminated substrate is exposed to ultraviolet (UV) light in a double-sided drawer exposure unit with a vacuum system for transferring the pattern, as shown in Fig. 4b. As a result, the portion of the laminate that is not protected by the mask becomes etch-resistant. The substrate is treated with a developer in a spray developing unit, where the unexposed part gets dissolved in the developer. The photoresist developer used is sodium carbonate. The entire laminate surface is developed simultaneously with the help of a rotary system present inside a developing chamber. After developing, the copper from the portion of the laminate protected from UV is etched away using ferric chloride copper etchant solution, by hanging it in a foam etching center. The substrate is now ready with the desired pattern, but the photoresist remaining on the substrate needs to be stripped away. The stripping is carried out in a stripping cuvette in which the remains of the photoresist are removed by rinsing the substrate in sodium hydroxide solution. Finally, the substrate is dried by placing it in an oven. In order to realize the sensor, firstly, a rectangular patch of dimensions 2.92 × 31.73 mm² is etched on top of the Rogers RT5880 substrate to identify the location where the groove has to be constructed. A rectangular pocket of 2 mm depth is then cut at this site by removing the substrate material using a three-axis high-precision CNC milling machine. The substrate with the sensing cavity is shown in Fig. 4c. Next, the residual copper coating of the structure of Fig. 4c is removed and the substrate is copper-plated anew so as to have a copper layer of uniform thickness (35 µm) all over the substrate, including the interior of the groove. The process described in Fig. 4b is now repeated, this time with a new mask having the pattern of the CELC resonator of Fig. 1d. Prior to the UV exposure, the longer faces of the groove are coated with liquid photoresist so that the copper cladding remains only on these faces, which form the sidewalls of the groove, after exposure and development using sodium carbonate. The process is illustrated in Fig. 4d; the liquid photoresist coated sidewalls can be seen in the inset figure. The sensor prototype with embedded cavity and metalized sidewalls is shown in Fig. 4e. Subsequently, the mask with the geometry of the microstrip line of Fig. 1a is used to construct the line on the top plane to form the complete sensor.
Particular attention has to be paid while attaching the SMA connector (part number: R124 403 123, 14.43 mm long and 12.7 mm wide) with an inner conductor diameter of 1.27 mm onto the 9.6 mm broad microstrip line, as there is a high probability of the outer conductor coming in contact with the microstrip line. Thus, to stay clear of this uncertainty, the sensor is configured with an allowance of 0.25 mm at the shorter ends of the sensor, where the copper is stripped off the substrate so that the SMA outer conductor and the microstrip line are separated in space.
The experiments are performed using an Agilent N5230C vector network analyzer (VNA) in the frequency range of 1-5 GHz. Figure 5a illustrates the fabricated sensor prototype, connected to the VNA. The samples are prepared from freshly collected goat blood from a slaughterhouse in purple/lavender-cap BD Vacutainer® spray-coated K3-EDTA tubes that are typically used for whole blood hematological studies. The blood samples in EDTA tubes are shown in Fig. 5b. The samples may be preserved in a refrigerator for up to three days. A controlled amount of d-glucose is then added to the blood specimens, and samples having concentrations of 100 mg/dl, 200 mg/dl, 300 mg/dl, 400 mg/dl and 500 mg/dl are prepared. The initial glucose concentration of the goat blood was taken into consideration while preparing the samples. The reference sugar level of the goat blood was 88 mg/dl, as measured using a OneTouch® SelectSimple™ blood glucose monitoring system. In addition to the blood samples, aqueous glucose solutions are also prepared using d-glucose and deionized water. The samples are then injected into the custom-manufactured borosilicate glass capillaries, shown in Fig. 5c, using a plastic medical syringe. As can be seen from the figure, the ends of the capillary need not necessarily be sealed to prevent the sample from flowing out; instead, the sample stays in place due to surface tension alone. However, the ends of the capillary may be sealed to prevent external contamination. The sample holders are disposable and may be discarded after use, to ensure that the current data acquisition is not affected by residual traces of previous samples. The measured results of aqueous glucose and blood samples are presented in Fig. 6. The measured resonant frequency of the sensor in the unloaded condition is 1.71 GHz, deviating slightly from the simulated value, as the metallization and fabrication of the delicate, narrow and shallow components do not match perfectly with the simulated model. However, this deviation does not interfere with the characterization of samples, as each resonator is calibrated and modelled based on a measured dataset 45. Two aspects have to be recognized while monitoring glucose using microwave sensors. First, the glucose concentrations of the test samples do not have a linear correlation with their dielectric properties. Second, the relationship of the transmission responses of the sensor with the samples' dielectric properties is also not linear. This varies from sensor to sensor and is unique 34. It is in accordance with this relationship that a sensor is calibrated, so that the readings shown are based entirely on the frequency shift of that particular sensor alone. When the empty glass container is placed in the sensing region, the resonant frequency is found to have shifted to 1.6856 GHz. Figures 6a and b, respectively, show the measured transmission responses when the samples under test are aqueous glucose solution and goat blood. While testing the samples of aqueous glucose solution, the sensor is observed to have a sensitivity of 0.0185 MHz/mg dl−1. On the other hand, when the blood samples are tested, the sensor is found to be more sensitive, having a sensitivity of 0.056 MHz/mg dl−1. This reduced sensitivity is anticipated in the case of aqueous samples, as the sensor's sensitivity tends to saturate at higher permittivity levels, as evident from the plot of Fig. 3b.
It is well known that blood samples have a much lower dielectric constant compared to aqueous solutions, hence the better distinguishability. The glucose concentrations and the deviations of the frequencies from the unloaded resonant frequency show a good quadratic correlation; the curve-fitted plots are presented in Fig. 6c for both the aqueous and goat blood samples. Each sample was measured six times and the deviations are shown by the non-overlapping error bars for each concentration. The regression equations are given by (1) and (2) for the aqueous glucose and blood samples, with the goodness of fit calculated as R² = 1 and R² = 0.9955, respectively.
where f_a and f_b are the measured frequency shifts in MHz corresponding to the aqueous glucose and blood samples, respectively, for a glucose concentration of g mg dl−1. The measurements are carried out at a room temperature of 23 °C. Though temperature is one of the many physical factors that can influence the measurement readings, ambient temperature variations were found to have hardly any impact unless the conditions are extreme. Likewise, the sensor performance was not found to show any notable variation with time.
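The quadratic calibration can be reproduced with any least-squares routine. In the sketch below, the (concentration, shift) pairs are hypothetical stand-ins, not the measured data, and the fitted coefficients therefore do not reproduce Eqs. (1) and (2):

```python
# Hypothetical stand-in data; the paper's fitted coefficients for
# Eqs. (1) and (2) are not reproduced here.
import numpy as np

conc = np.array([100, 200, 300, 400, 500])        # glucose (mg/dl)
shift = np.array([5.2, 11.0, 17.4, 24.5, 32.1])   # frequency shift (MHz), hypothetical

coeffs = np.polyfit(conc, shift, deg=2)           # quadratic: a*g**2 + b*g + c
fit = np.poly1d(coeffs)

r2 = 1 - ((shift - fit(conc)) ** 2).sum() / ((shift - shift.mean()) ** 2).sum()
print("coefficients (a, b, c):", coeffs)
print(f"R^2 = {r2:.4f}")
```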
Conclusion
In this work, the design of a reusable microwave microfluidic sensor for monitoring blood glucose concentration is presented. The reusability fits well with the current global societal trend of reducing waste materials and promoting environment-friendly new technologies. The proposed resonator has a CELC geometry and possesses higher sensitivity compared to various other sensor designs of comparable dimensions. The overall sensitivity of the sensor is improved compared to conventional sensors by extending the metallic walls around the active region into the substrate for increased capacitance and concentrated field intensity. The sensor is able to detect changes in the glucose concentration of aqueous solutions and real blood samples. The test samples are placed in disposable glass capillaries, isolating the samples from external contamination and thus making the sensor reusable. The most prominent advantage of the proposed scheme is the reusability of the sensor, as a result of which the detection becomes extremely fast because the channels and the sensor do not need to be cleaned after each measurement. The method is quite economical and has abundant scope in the characterization of various other biological as well as chemical samples. The study can contribute further to the microwave-based blood glucose monitoring discipline by exploring the possibilities of characterizing healthy blood samples within normal glucose levels.
Working Anytime and Anywhere – Even When Feeling Ill? A Cross-sectional Study on Presenteeism in Remote Work
Background: Working despite feeling ill – presenteeism – is a widespread behavioral phenomenon. Previous research has shown that presenteeism is influenced by various work-related and personal factors. It is an illness behavior leading to a range of negative but also positive consequences. Due to the coronavirus disease 2019 (COVID-19) pandemic, remote work has become the "new normal" for many employees. But so far, little is known about presenteeism in remote work. This study aims to investigate presenteeism in remote work by looking at the extent of remote presenteeism, differences to presenteeism in on-site work, and associated factors. Methods: A nationwide cross-sectional online survey was conducted in Germany with N = 233 participants. Data were analyzed using descriptive statistics, t-tests, and correlation analysis. Results: The results reveal that presenteeism is prevalent in remote work (x̅ = 4.13 days; Md = 3; D = 2; s = 4.95). A low ability to detach from work (r = -.17; p = .005) and low supervisor support (r = -.14; p = .02) are associated with more remote presenteeism days. Remote working conditions seem to facilitate presenteeism. Conclusion: This study provides empirical insights into a subject area of great societal relevance. The results show that awareness should be raised for presenteeism in remote work. It should be regarded as a behavior that can be functional or dysfunctional, depending on the individual situation. Supervisor support and detachment should be fostered to help reduce dysfunctional presenteeism. Promotion of health literacy might help remote workers to decide on a health-oriented illness behavior. Further research is vital to analyze to what extent and under which circumstances presenteeism in remote work is (dys)functional and to derive clear recommendations.
Introduction
Presenteeism is a widespread behavioral phenomenon [1]. In line with the European research line, presenteeism is defined as the behavior of working despite illness. It thus represents the alternative behavior to illness-related absence from work [2]. Both behaviors, presenteeism and absenteeism, result from conscious decision-making processes when facing ill health [3]. A representative survey of the German working population shows that 65% of the respondents had worked at least once within a year despite feeling ill [4]. But what are the consequences of working despite illness? Presenteeism can be seen as adaptive behavior, which can be functional or dysfunctional [5]. In a health-oriented work environment, presenteeism can have positive effects. For example, it can foster workplace inclusion of employees with chronic conditions and is therefore considered functional [6]. At the same time, presenteeism also leads to a variety of negative effects. For example, frequent work despite illness is associated with a reduction in overall health [7]. In this case, presenteeism is a rather dysfunctional behavior [5].
The reasons for working while ill are by no means monocausal, but multifactorial and multilayered. They can be subdivided into three main categories: personal factors, work and organizational factors, as well as structural and environmental factors [8].
Due to the COVID-19 pandemic and the resulting restrictions, supervisors and employees in many areas were forced to redesign established structures and processes. Remote work has become the 'new normal' for many employees [9]. Due to its widespread usage, a new debate on remote work has been sparked politically. In the political discourse, apart from the current goal of infection control, the focus is primarily on the benefits of remote work, such as administrative cost savings for companies [10] and more flexibility to balance work and other life domains [11]. However, several studies show that working remotely also leads to disadvantages that need to be considered and minimized, especially concerning employee health. For example, working remotely can increase psychological stress and reduce the ability to detach from work during leisure time [12,13].
Regarding presenteeism, many factors that lead people to work when ill change in a remote working environment (e.g., leadership) [14]. Initial evidence suggests that presenteeism is also prevalent in remote work [15,16]. However, presenteeism in remote work is an understudied phenomenon. Some studies imply that employees recognize working remotely as a good option to work despite illness [17,18]. Thus, the expansion of remote work potentially exacerbates the problematic, but also the positive, effects of presenteeism.
To address the research gap on presenteeism in remote work, this study aims to examine the phenomenon of remote presenteeism more closely. It investigates different research questions and hypotheses, only part of which is presented in this article. Research question 1 will first examine the existence of presenteeism in remote work in descriptive terms to describe the extent and relevance of the topic.
RQ1: To what extent do employees show presenteeism in remote work?
Working remotely is associated with a reduced ability to detach from work [20]. Detachment describes the competence to mentally distance oneself from work during leisure time [19]. In on-site work, the association between presenteeism and detachment has rarely been explored. Initial results on presenteeism in remote work indicate that a high degree of detachment is associated with less presenteeism [21,22]. However, since the association between detachment and presenteeism has not yet been sufficiently investigated, research question 2 is to be examined.
RQ2: Is presenteeism in remote work associated with reduced detachment from work?
Even though the research on presenteeism in remote work is relatively new, a common theoretical foundation can be established from the singular strands of investigation. Research shows that working remotely is associated with reduced supervisor support [23]. At the same time, various studies find a significant, negative association between supervisor support and presenteeism [24,25]. Only one study was found that investigates supervisor support in association with presenteeism in remote work. It detected a significant negative association [18]. In contrast, indirect work control was found to be positively associated with remote presenteeism [16]. Against this background, hypothesis 1 will be tested.
H1:
The more employees feel supported by their supervisors in remote work, the less presenteeism they show.
Due to insufficient research and theoretical frameworks, no clear statements can be made about the associations and differences between presenteeism in remote work and on-site work. Some studies could not find significant differences between presenteeism in remote and on-site work [26,15]. At the same time, there is initial empirical evidence and additional theoretical work [3] implying higher rates of presenteeism in remote work compared to on-site work [27,22]. Furthermore, research indicates that conditions in remote work seem to facilitate presenteeism [28]. Based on these studies, hypothesis 2 will be tested.
H2: Employees show an increased tendency for presenteeism in remote work compared to on-site work.
The main focus of this study lies on factors associated with presenteeism that can be modified (detachment and supervisor behavior). Yet, research on presenteeism in on-site work usually also includes aspects that are more static, like company characteristics. Various reviews conclude that employees in large enterprises engage in presenteeism more frequently than employees in small and medium-sized enterprises [29,30]. To analyze whether this difference also holds in remote work, research question 3 is to be examined.
RQ3: Do employees in large enterprises show more presenteeism than employees in small and medium-sized enterprises when working remotely?
In this study, we aim to address the gap in knowledge about presenteeism in remote work by examining the prevalence, associated factors (detachment and supervisor support), differences in the location-based tendency for presenteeism (remote vs. on-site work), and differences in remote presenteeism days due to enterprise size. The stated research questions and hypotheses are examined in a cross-sectional design.
Procedure
A nonexperimental cross-sectional study was conducted. The data were collected via the online survey tool SoSci Survey. The questionnaire was pretested and revised accordingly. Data were collected nationwide in Germany in December 2020. We used different methods to disseminate the survey to the target population of remote workers in Germany (social media, snowball sampling, and recruitment via gatekeepers). The study data are thus collected from a nonrandom opportunity sample. Due to the distribution channels, it was not possible to calculate a response rate.
Measures
For this study, the research team developed a questionnaire, supported by external expertise from a presenteeism researcher. The questionnaire consisted of established scales and self-developed items.
Presenteeism
Following recommendations by an international group of presenteeism researchers [3], the total number of presenteeism days was surveyed with an open-ended question based on Demerouti et al. [31] ("On how many days did you work remotely in the last 3 months although you felt ill?"). The measurement of presenteeism and all related aspects referred to a retrospective memory period of 3 months, as done by Baeriswyl et al. [32] and Wang et al. [33]. The subjective perception of health/illness was chosen as it is the crucial issue for the decision-making process [34,3]. Participants were rated as presentees in case of one or more presenteeism days.
Detachment
The employees' ability to detach from work was operationalized using the validated scale by Sonnentag and Fritz [35]. It consists of four items (e.g., "At the end of the day I don't think about work at all.") and is measured on a 5-point Likert scale (1 = strongly disagree to 5 = strongly agree).
Perceived supervisor support
Perceived supervisor support was measured using the validated scale by Rusbasan [36]. Perceived supervisor support is measured via five subscales: Emotional Support, Appraisal Support, Resource Support, Outside-of-Work Support, and Career Support. The subscales comprise three items each and are measured using 7-point Likert scales. To keep the questionnaire within a reasonable time frame, the Career Support scale was excluded.
Location-based preference for presenteeism
The location-based preference for presenteeism (remote vs. on-site) was surveyed using two separate items. These are derived from the results of the qualitative survey by Dahlke et al. [37]. Using bipolar rating scales (1 = much easier to 5 = much more difficult), first the comparison of the perceived difficulty of working despite feeling ill was assessed ("Compared to on-site work, working remotely when I feel ill is ..."). Second, the location-based comparison of the decision when feeling ill was surveyed ("Compared to on-site work, the decision not to work remotely when I feel ill is ...").
The questionnaire was designed to take about 10 minutes. A pretest was carried out (n = 8) to improve the validity of the questionnaire. The results were used to optimize the questionnaire, mainly with regard to the comprehensibility and clarity of the items. Two inclusion criteria were set for participation: 1) an average share of at least 60% remote work per week during the last 3 months. This criterion was set to minimize the risk of a recall bias when working just a few hours remotely. At the same time, a pragmatic approach had been chosen by not being too restrictive, in order to reach a sufficiently large sample. 2) A feeling of illness at least once in the last three months. This criterion was set because without a feeling of illness, there is no choice to be made about presenteeism.
Participants
A total of 595 data sets could be generated, of which 300 data sets met the required inclusion criteria. After checking the data for missing values and data quality, N = 233 participants remained in the sample. The sample only includes employees, not self-employed workers. The sociodemographic characteristics of the participants are shown in Table 1.
Data analysis
Descriptive, correlational, and reliability analyses and t-tests were performed using SPSS 22 software. For significance tests, an error probability of 5% was assumed. The reliability of all established scales was examined using Cronbach's α. All scales can be rated as good to excellent in their internal consistency according to the classification by Blanz [39] (α = .82-.93). Regarding the self-developed items, confirmation of hypothesis 2 can be assumed if working remotely is rated significantly easier when feeling ill. At the same time, the decision against presenteeism in remote work must be rated more difficult. To analyze research question 3, a dummy coding of the item enterprise size was undertaken (1 = micro-, small-, or medium-sized enterprise; 2 = large enterprise).
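As a brief illustration of the reliability check described above, Cronbach's α can be computed directly from an item-response matrix. This is a minimal Python sketch, not the authors' SPSS procedure; the response values and array shape are hypothetical.

```python
# Minimal sketch: Cronbach's alpha from an (n_participants, n_items) matrix.
# The `responses` values below are illustrative, not study data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency: k/(k-1) * (1 - sum(item variances) / total variance)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # variance of each item, summed
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the scale total
    return k / (k - 1) * (1 - item_vars / total_var)

responses = np.array([[4, 5, 4, 4],
                      [2, 2, 3, 2],
                      [5, 4, 5, 5],
                      [3, 3, 2, 3]])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```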
After data preparation, the distributions of interval-level scales were checked for outliers and normality. A z-transformation was performed. Values greater than 3.29 or less than -3.29 were identified as outliers. Since the outliers can be attributed to a few cases, they were winsorized. Nonparametric procedures (RQ2 & H1: Spearman correlation; H2: one-sample Wilcoxon test) were carried out for items that were not normally distributed to validate the results of parametric procedures (RQ2 & H1: Pearson correlation; H2: one-sample t-test). Since nonparametric analyses delivered congruent results in all analyses, they are not reported separately.
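The outlier handling above can be sketched as follows; this is an illustrative Python reconstruction, not the authors' code, and the `days` values are hypothetical (one extreme value is included to trigger the rule).

```python
# Minimal sketch: z-transformation, |z| > 3.29 outlier flagging, winsorizing.
import pandas as pd

days = pd.Series([0, 1, 1, 2, 2, 2, 3, 3, 3, 3,
                  4, 4, 4, 5, 5, 6, 2, 1, 0, 40])  # illustrative values only

z = (days - days.mean()) / days.std(ddof=1)         # z-transformation
cutoff = 3.29
is_outlier = z.abs() > cutoff                       # flags the value 40 here

# Winsorizing: pull each outlier in to the most extreme non-outlying value.
upper = days[z <= cutoff].max()
lower = days[z >= -cutoff].min()
days_winsorized = days.clip(lower=lower, upper=upper)
```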
RQ1: Extent of presenteeism in remote work
The descriptive analysis of the total presenteeism days showed that 87% of the respondents had worked remotely at least one day during the last three months despite feeling ill. On average, presenteeism in remote work occurred on x̄ = 4.13 days (Md = 3; D = 2; s = 4.95). The frequency distribution can be seen in Fig. 1. The z-values of the distribution display five values as outliers. The additional descriptive analysis of the winsorized values shows that the mean is not substantially distorted by these outliers. The right skewness and steepness of the distribution can be improved by winsorizing, but not eliminated.
RQ2. Association between presenteeism in remote work and detachment from work
The parametric correlation of the total presenteeism days (winsorized) with the detachment scale (n = 230) was tested one-sided due to the one-tailed research question. It showed a significant negative association (r = -.17; p = .005).
H1. Association between remote presenteeism and perceived supervisor support
The parametric correlation of the total presenteeism days (winsorized) with the supervisor support scale (n = 230) was tested one-sided due to the one-tailed hypothesis and showed a significant negative correlation (r = -.14; p = .02).
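For illustration, the one-sided correlation tests reported for RQ2 and H1 can be sketched as below; the arrays are hypothetical data, and halving the two-sided p-value is one common way to obtain the one-tailed test for a directional hypothesis.

```python
# Minimal sketch: one-tailed Pearson correlation for a hypothesized
# negative association; data are illustrative, not study data.
import numpy as np
from scipy import stats

presenteeism_days = np.array([0, 1, 2, 3, 5, 8, 2, 4])
supervisor_support = np.array([6.1, 5.8, 5.0, 4.4, 3.2, 2.9, 5.5, 4.0])

r, p_two_sided = stats.pearsonr(presenteeism_days, supervisor_support)
# One-sided p in the hypothesized (negative) direction:
p_one_sided = p_two_sided / 2 if r < 0 else 1 - p_two_sided / 2
print(f"r = {r:.2f}, one-sided p = {p_one_sided:.3f}")
```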
H2. Presenteeism in remote vs. on-site work
The evaluation of this hypothesis was carried out using descriptive statistics as well as one-sample t-tests. The theoretical mean and median, on which the analyses were based, were three (on a five-point Likert scale) and postulated no location-related difference (equally easy or difficult).
Of the 201 participants, 85% rated working remotely when feeling ill as easier or much easier compared to on-site work (x̄ = 1.93; Md = 2; D = 2; p = .72). Only 3% of the sample reported finding it more difficult or much more difficult (see Fig. 2). The analysis of the z-values revealed n = 1 outlier for this distribution. The one-sample t-test revealed that the actual mean is significantly different from the theoretical scale mean (t(200) = -21.13; p < .001; d = -1.49). The difference in mean values is -1.07 (95% CI [-1.17, -0.97]).
Regarding the location-based comparison of the presenteeism decision when feeling ill, 65% of the participants stated that the decision against presenteeism is more difficult or much more difficult in remote work (x̄ = 3.69; Md = 4; D = 4; p = .91). Nine percent found this decision easier or much easier in remote work (see Fig. 3). The analysis of the z-values revealed no outliers for this distribution. The one-sample t-test showed that the actual mean deviates significantly from the theoretical scale mean (t(200) = 10.71; p < .001; d = .76). The difference in mean values is 0.69 (95% CI [0.56, 0.81]).
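The one-sample tests against the scale midpoint can be sketched as follows; this is an illustrative reconstruction with hypothetical ratings, using the usual one-sample formulation of Cohen's d.

```python
# Minimal sketch: one-sample t-test against the theoretical scale mean of 3,
# plus Cohen's d; `ratings` is illustrative, not study data.
import numpy as np
from scipy import stats

ratings = np.array([1, 2, 2, 1, 3, 2, 1, 2, 4, 2])  # 1 = much easier ... 5 = much more difficult
midpoint = 3.0                                       # no location-related difference

t, p = stats.ttest_1samp(ratings, popmean=midpoint)
d = (ratings.mean() - midpoint) / ratings.std(ddof=1)  # one-sample Cohen's d
print(f"t({ratings.size - 1}) = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```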
Discussion
In this study, we aimed to analyze presenteeism in remote work and its associated factors. We also examined location-based differences of presenteeism in remote compared to on-site work and group-based differences in presenteeism days between employees in small- and medium-sized enterprises compared to large enterprises. For this, we conducted an online questionnaire in a cross-sectional design. The results show that presenteeism is prevalent in remote work. Employees seem to decide for presenteeism more easily in remote work than in on-site work. Detachment, as well as supervisor support, are detected as associated factors of remote presenteeism. Enterprise size was not found to affect presenteeism days significantly in remote work.
The extent of presenteeism in remote work
The results of the present study indicate that presenteeism is a widespread phenomenon in remote work. An overwhelming majority of the sample worked at least one day within the last 3 months despite feeling ill. The measures of central tendency and dispersion indicate that the distribution is broad and shifted to the left, indicating many answers in the lower value range. This indicates that many participants only showed a small number of presenteeism days, and the mean may be affected by fewer cases toward the right end of the distribution.
Since presenteeism is measured using different operationalizations and recall periods, the comparison of results is challenging. With comparable operationalizations, representative studies in the German on-site working population have identified 65% to 71% of participants as presentees within 1 year [2,40,4]. In the present sample, substantially more employees showed presenteeism in only a quarter of that period. The number of presenteeism days, x̄ = 4.13, in the present sample is also higher than in previous research [40]. This should not be overinterpreted due to the already mentioned broad and shifted distribution. Many factors could have impacted the prevalence of presenteeism in the present study. First of all, the self-selection of the sample may have caused a bias that can lead to an overestimation of remote presenteeism. Due to the increased mental strain during the pandemic [41], it is possible that more employees felt a worsening of their mental health. Thus, the probability of presenteeism may be increased as the base rate of impaired health might be higher. Aspects such as job insecurity within an uncertain labor market during the pandemic could also cause the prevalence to be overestimated. All in all, the available data shows that presenteeism in remote work is prevalent, but the analyzed extent must be interpreted with caution.
Remote presenteeism and detachment
The explorative analysis of the association between detachment and presenteeism identified a significant, negative correlation in the expected direction. Accordingly, reduced detachment is associated with an increased number of presenteeism days. The effect size of the association is small, according to Cohen [42]. The results are in line with the qualitative studies by Eddleston and Mulki [21] and Strasser et al. [22]. Due to the small effect size, it must be assumed that reduced detachment is one aspect among many others that can be associated with remote presenteeism in the present sample.
Remote presenteeism and perceived supervisor support
For hypothesis 1, which postulates a negative association between supervisor support and presenteeism in remote work, the alternative hypothesis is accepted. The present survey provides evidence that more support from supervisors in remote work is associated with fewer presenteeism days. Since the effect size of the correlation is small, supervisor support must be interpreted as one component among others. Studies that identified significant, negative associations between presenteeism and supervisor support in on-site work also found small effect sizes [43,44,24,45,46]. Accordingly, the determined correlation is in line with current findings regarding the direction and strength of association.
Presenteeism in remote vs. on-site work
For the postulated location-based preference for presenteeism in remote work, the alternative hypothesis is accepted. Participants rated working remotely despite feeling ill to be significantly easier than working ill on-site. The effect size can be classified as large, according to Cohen [42]. At the same time, employees rated the decision against presenteeism to be significantly more difficult in remote compared to on-site work. The effect size corresponds to a medium to large effect.
These findings indicate that employees might be more prone to presenteeism in remote than on-site work. That is in line with current research by Walter et al. [47], showing that remote workers report significantly more presenteeism than on-site workers. Furthermore, it shows that remote work seems to facilitate presenteeism, as also seen in research conducted by Ruhle and Schmoll [48]. The fact that employees find it easier to work ill in remote work settings suggests that remote work may favor functional presenteeism. Remote presenteeism could, therefore, make it possible to continue performing without worsening the health status. However, the fact that the decision against presenteeism is more difficult at the same time dampens the positive view. This result indicates that employees may also decide in favor of presenteeism when their health status does not allow it.
Remote presenteeism in small- and medium-sized vs. large enterprises
In this study, no significant group differences between the presenteeism days of remote employees in small- and medium-sized enterprises compared to large enterprises could be found. The sizes of the analyzed groups differed remarkably (small- and medium-sized enterprises n = 82, large enterprises n = 147). Therefore, it was checked whether the power was sufficient to determine a potential significant difference. For this purpose, a post-hoc power analysis was performed using the software G*Power 3.1. With a result of 1 - β = .99, the power was good. Therefore, detecting a significant group difference in the data was statistically possible.
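The post-hoc power check was done in G*Power; an equivalent computation can be sketched in Python with statsmodels. The effect size below is an assumption for illustration, as the text does not report the value that was entered.

```python
# Minimal sketch: post-hoc power for a two-group t-test with unequal group sizes.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().power(
    effect_size=0.5,    # assumed Cohen's d (not reported in the text)
    nobs1=82,           # small- and medium-sized enterprises
    ratio=147 / 82,     # large-enterprise group size relative to nobs1
    alpha=0.05,
)
print(f"post-hoc power = {power:.2f}")
```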
This result is not consistent with former on-site work research, which could detect more presenteeism days in large enterprises [30,29]. This might indicate that enterprise factors are not as relevant in remote work as in on-site work. An explanation could be that working conditions might converge more in remote work. Furthermore, the direction of the mean difference leads in the same direction as in previous research for on-site work. For further interpretation, more research is needed.
Limitations
The conceptual limitations can, first and foremost, be seen in the nonexperimental cross-sectional design, which does not allow any conclusions about causality. However, an experimental study design is unsuitable for the present research question, since presenteeism is a variable that can hardly be manipulated. For this reason, nonexperimental surveys are currently the most common design in studies on presenteeism [3].
Another limitation concerns the sample structure. The representativeness of the sample is questionable. Besides the sample size, it cannot be ruled out that the results are distorted by a (self-)selection bias leading to an overestimation of the prevalence of remote presenteeism. Sex or gender were not surveyed in this study. The generalizability of the results to all genders is therefore questionable. In addition, there is insufficient knowledge about the target population (people working remotely in Germany), which doesn't allow a sufficient analysis of possible self-selection effects. Yet some studies give indications about the population and allow comparisons. The sample (see Table 1) shows similarities to the (known) target population with regard to enterprise size [49], remote working experience [50], supervisor support [51], and detachment [52]. Compared to the general working population in Germany, the sample shows similar characteristics regarding health status [4], the proportion of chronic conditions [53], and the ratio of presenteeism to absenteeism days [54]. When comparing to the general working population, it must be noted that remote workers are generally disproportionately often white-collar workers while blue-collar workers are underrepresented [11]. This is also found in the present sample. Compared to the general working population in Germany [55], the service sector is overrepresented in this study, whereas the industry sector is underrepresented. Even if no conclusive statement regarding the target population is possible, it can be assumed that the sample may well reflect the population in key characteristics.
Due to the survey type, recall biases may occur. However, Strasser et al. [22] showed that retrospective measurements underestimate presenteeism compared to real-time measurements. Therefore, these effects may offset each other. In the current study, a multivariate regression analysis combining the investigated variables detachment, supervisor support, and enterprise size, and controlling for other variables, would have been desirable, but couldn't be carried out due to methodological constraints.
Even if only one way of operationalizing presenteeism was presented in this article, it should be mentioned that the different measurements and their different scopes of validity lead to difficulties in measuring and interpreting results as well as comparing them to existing research. Therefore, it is necessary to further examine the operationalizations and measurements in comparative methods studies.
Implications for research and practice
The present study showed that remote presenteeism is a relevant phenomenon. Further research is needed to examine the prevalence of presenteeism in remote work in a representative sample. Reasons for remote presenteeism, possible moderators and mediators, and differences in decision-making behavior need to be investigated in more detail and compared to on-site work. More research on company characteristics, such as enterprise size, is necessary to get further insights into correlates of remote presenteeism. In addition, mixed types of presenteeism and absenteeism due to employees' individual load and power control and their consequences should be further examined. These mixed types can be expressed, for example, by only attending a specific online meeting or doing an urgent task but being absent for the rest of the working day. It can be assumed that those mixed types are more prominent in remote work as work and relaxation can be combined more easily, compared to on-site work. Longitudinal or diary studies are also necessary to identify the direction and nature of associations and to determine the consequences of presenteeism in remote work. Multivariate analyses, such as regressions, are desirable to analyze the multilayered correlates of remote presenteeism in depth.
When examining the relationship between presenteeism and supervisor support, colleague support should be included in future research. That is because research in organizational sociology suggests a close relation between both variables [56]. Therefore, one variable might moderate or mediate the association with presenteeism of the other. The theoretical framework by Ferreira et al. [57] might help guide future research on remote presenteeism. It is necessary to conduct qualitative as well as quantitative studies and to integrate the findings from studies with different operationalizations in a common framework, to be able to adequately reflect the complexity of the phenomenon. Furthermore, methodological studies are needed to improve operationalizations and measures of presenteeism on common grounds.
Concerning practice, the present study, first of all, implies the importance of raising awareness among companies about remote presenteeism. This is the pivotal point for developing actions. To date, however, knowledge about presenteeism as well as its causes and consequences has been insufficiently disseminated in companies and businesses, even concerning on-site work settings [58]. The topic of remote presenteeism should be implemented in existing programs for remote work and remote supervision.
To monitor presenteeism in companies in the long term, the measure of total presenteeism days is a relevant indicator of health, performance, and costs [58]. Especially in companies with already established employee surveys, the indicator of total presenteeism days can be added easily and raise attention to presenteeism in the long term.
So far, only a few intervention studies exist analyzing the effectiveness of measures to reduce on-site presenteeism [59]. Research evidence suggests that workplace health promotion interventions designed to increase health and reduce absenteeism can also reduce (dysfunctional) presenteeism [60]. Functional presenteeism doesn't need to be reduced and can even be health-promoting. The current findings indicate that the health literacy of employees in remote work needs to be supported. Sociomedical guidelines for assessing work ability [61] can help understand the difference between illness and health-related (un)fitness for work. It seems particularly necessary for remote employees to develop competencies for appropriately assessing their health condition and accordingly making a health-conscious decision for functional or against dysfunctional presenteeism.
The associations of remote presenteeism with supervisor support and the ability to detach cannot provide any clear recommendations for action due to the small effect sizes. However, participants reported that detaching from remote work was more difficult for them, and they felt less supported by their supervisors. Accordingly, there is a need for action in both areas, which might also have beneficial effects on reducing dysfunctional presenteeism. To improve supervisor support, training should be conducted that focuses on the specific characteristics of remote work. Supervisors must develop awareness that remote leadership needs to be adapted and that an indirect leadership style is usually effective [62]. To improve the ability of remote employees to detach from work, (online) programs for health promotion, for example, improving the ability to draw boundaries between work and private life [63], should be implemented.
Conclusion
This study provides empirical findings in a subject area of great and probably growing societal relevance. The results indicate that presenteeism is widespread in remote work. Therefore, it should be considered in remote management and self-management practices. Detachment from work and supervisor support were found to be associated factors of remote presenteeism. Supervisor support can be improved by training focusing on the specific conditions and the subsequent employees' needs in remote work. Detachment from work can be improved using established health promotion programs. Both might help reduce dysfunctional presenteeism. As employees show a higher tendency for presenteeism in remote compared to on-site work, it seems necessary to foster the health literacy of remote employees.
Based on these results, further studies are necessary to identify mechanisms of presenteeism in remote work and to be able to derive more specific recommendations for action. In particular, it is important to analyze under what circumstances remote presenteeism can be functional and which conditions contribute to dysfunctional presenteeism. The aim is to create health-oriented settings in remote work that build on the advantages of working remotely (also with regard to presenteeism) while tendencies to work until complete exhaustion are prevented.
Fig. 3. Difficulty to stay when feeling ill in remote work compared to on-site work. | 2023-11-10T16:46:30.553Z | 2023-11-01T00:00:00.000 | {
"year": 2023,
"sha1": "168a21bcf916b0606d0bbe7e45092be9e1975194",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.shaw.2023.11.001",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d6c8a76063fad9b2bcb31922b1f07df73984972c",
"s2fieldsofstudy": [
"Environmental Science",
"Psychology",
"Sociology"
],
"extfieldsofstudy": []
} |
229353737 | pes2o/s2orc | v3-fos-license | Caring for patients in the global programme to eliminate lymphatic filariasis
Abstract Clinical lymphatic filariasis (LF) is a debilitating, disfiguring medical condition with severe psychosocial consequences for patients and their families. Addressing these patients' medical needs is a major component of the global programme to eliminate lymphatic filariasis (GPELF). In the 20 y of providing a minimal package of care, many thousands of surgical operations to correct LF hydrocoeles have been performed, and national programmes in >90% of LF endemic countries have received the training needed to care for their patients. The creation of educational materials detailing appropriate patient care, together with increased funding, have been key catalysts in increasing awareness of clinical LF in recent years. Nevertheless, the implementation of care for these patients has often faced challenges that have led to delays in fully implementing the patient care component of GPELF; these include locating these often stigmatised individuals, maintaining provision of the necessary consumables (e.g. soaps and creams) and maintaining programme support within already overstretched national LF teams. As the LF global programme moves to achieve success by 2030 it will be vital to continue to focus efforts on the care and rehabilitation of those suffering from lymphoedema and hydrocoeles, learning from the experiences of the past 20 y.
Introduction
The clinical images of lymphatic filariasis, grossly swollen legs (Figure 1) and enlarged male genitals, have been well known across the world for many years. Prior to the establishment of the global programme to eliminate lymphatic filariasis (GPELF) in 2000, the general understanding of this condition, and its effects on patients and their families, remained largely one of rumour rather than fact. Only a relatively few dedicated care centres and investigators around the world were focused on this disease, covering clinical and chemotherapeutic aspects in India, 1 Haiti 2,3 and Sri Lanka, 4 surgery in Ghana 5 and Brazil, 6 as well as studies of its immunology, entomology, chemotherapy and pathology in the UK 7 and the USA. [8][9][10] Little was known about the disease in large endemic areas of the world, notably those in Africa.
Although global infection prevalence is now relatively well understood, the actual number of people suffering from this condition across the world remains difficult to estimate; it was estimated that when the GPELF started in 2000 there were 17.7 million lymphoedema (LE) cases and 29.9 million hydrocoele cases, 11 but this is probably an understandable underestimate.
A key factor here is that a majority of those infected do not present with the classic clinical features and appear to be able to carry the parasite without any apparent adverse effects; it is known, however, that subclinical changes are present in many (and maybe all) infected people. 1 Factors that contribute to the difficulty in assessing LF patient numbers include the varied methodologies used to make these estimates and the fact that endemic areas are often rural, isolated and medically underserved, compounded by the frequent reluctance of patients to be identified. Clinical case numbers in endemic populations have often been estimated, 11 albeit crudely, to be approximately 2-6% of an endemic community, and in bancroftian filariasis areas it is common to find twice as many hydrocoele cases as lymphoedema cases. However, it is now clear that this proportion varies considerably with the level of endemicity, the methodology used to access these cases and, importantly, the infecting filarial species, because Wuchereria bancrofti induces clinically evident hydrocoeles whereas Brugia species do not. 12 The major approach to treating LF-induced lymphoedema patients has been, and remains, hygiene care of the affected skin (i.e. careful regular washing) and limb care (i.e. physiotherapy); secondary infections are an important contributor to the ongoing condition. Correction of hydrocoeles requires a comparatively standard surgical intervention in most cases, although many of these patients are unwilling to undergo, or unable to afford, these operations; medical services in many endemic communities do not prioritise such elective surgeries. 13

The global LF patient care programme

In 1997, the WHO resolved to eliminate LF as a public health problem. 14,15 Three specific patient care activities must be included in a country's final dossier report: first, knowing the disease burden; second, providing access to a minimum package of care (MPC; Table 1); and, third, ensuring that this MPC is of adequate quality and that it is sustainable. An additional component for success is the provision of continuing care after GPELF ends for those who need long-term medical support (Table 2); the major emphasis is on including care for LF patients in a country's national primary health system activities as part of a move towards universal healthcare (UHC). Major progress has been made in breaking transmission through mass drug administration (MDA); however, the provision of accessible essential care to those with clinical disease (officially known as morbidity management and disability prevention [MMDP]) still requires attention in many endemic countries. An important target for countries is to achieve 100% geographic coverage of availability of the MPC. It should be noted that 100% coverage in this context is often defined as coverage of all LF endemic areas; however, the definition should include the whole country, as LF clinical cases are often present in areas where MDA is not being carried out, including major urban areas.
A major purpose behind the need to acquire patient numbers and locations, other than for statistical identification and advocacy, is to identify where the necessary medical services should be placed so as to enable these individuals to gain access to essential care (e.g. adequate oversight from health workers trained in treating LE, local hydrocoelectomy surgery camps). 16,17 New approaches have been used to obtain the burden of LF patients, including digital methods 18,19 and the use of local clinics; in general, it has been found that in many countries an essential route to locating patients is via local health workers. 17,20 The importance of clear messaging about the infection and availability of MMDP is central to successfully implementing care for those in need. Information that advises patients on the cause of their condition, the availability of help, clear instructions on how to carry out self-care and how to access surgery is vital to success. Communication with the endemic community as a whole is also essential; the visible provision of care to a community's patients is known to enhance the overall coverage of MDA. 19 Better understanding of the condition and its causes helps to reduce the stigma that virtually all patients experience. LF patient care groups have been used successfully to assist lymphoedema patients in maintaining their treatment and to provide them with support from others who are similarly affected. 21
Achievements to date
Key strategic factors in the successes achieved to date in providing MPC within GPELF have been (1) the availability of simple and effective strategies to medically manage LF lymphoedema and hydrocoeles, and (2) a definition of achievable, practical requirements ( Table 2) for national success in achieving validation of the elimination of LF as a public health problem. These strategies have been shown to be cost-effective 25 and achievable by countries, and indeed now over 18 countries have achieved the validation of elimination which required implementation of MMDP. The impact of the use of the MPC on lymphoedema patients has been personally dramatic to those affected 20,24 and has, for example, reduced the incidence and severity of the debilitating acute attacks in the majority of patients, thus improving their quality of life and well-being significantly. Many cases of lymphoedema, especially those of lower grades of severity, have also seen significant reductions in their lymphoedematous condition.
Arguably, the most noticeable impact the MMDP activities have upon the disease is in the large number of hydrocoelectomy cases that have successfully been treated in W. bancrofti endemic areas and that these operations have been carried out under standard quality guidelines. 22,26 Many thousands of hydrocoelectomies have now taken place as part of GPELF, S50 of S54 An important advance in the care of LF hydrocoele patients is recognition by the World Bank's disease control priorities of hydrocoelectomy as one of the 28 essential surgeries that should be made available in primary health facilities. 26 Another positive move in recent years has come through interactions between LF clinicians and researchers with their counterparts from other endemic disease. The field training of leprosy care workers so that they are also able to provide care for LF patients in their villages has also been successful, 28 and similarly with podoconiosis lymphoedema programmes and LF teams in Ethiopia. 29 The participation of LF in international discussions with other neglected tropical disease skin disease care providers has been mutually beneficial to those involved and has helped maintain a high profile for LF MMDP. An additional important success in recent years has been the increase in funding from major donors for MMDP activities, for example, funding for LF surgery from Norway (health and development international), as well as for programmatic development of surgeries and lymphoedema care from the UK (department of international development) and the USA (United States Agency for international development and the END Fund).
Research into various aspects of LF care has increased and the number of papers focusing on LF patient care in the last 10 y is double the total published in the previous decade. Areas of current research focus include studies of new antibiotic approaches: for example, doxycycline is being considered for its potential to reduce lymphoedema 30,31 ; this multicentre trial is using new digital technology for assessing the size of lymphoedema. 32 Other studies underway range from the use of thermography to monitor acute filarial attacks to understanding and treating the mental health needs of patients. 33
Reaching success by 2030
The experiences of the past 20 y have shown that there are clearly some issues still to be addressed if the goal of GPELF completion by 2030 is to be achieved (Table 5). Other than the obvious (the provision of adequate funding and continuing advocacy at national ministerial level, and also with donors, concerning the necessity of MMDP for GPELF success), arguably one of the most important actions is to ensure that the implementors of national programmes understand how to carry out the required MMDP activities. This has become more necessary as countries move closer to being successful in breaking the transmission of infection. In parallel, there is a need now more than ever to increase international support for LF MMDP activities. To reach success by 2030 it will be important to focus support upon those countries that are having difficulty in implementing MMDP programmes and to specifically assist them with the more complicated of the two care activities, namely, support for those suffering from lymphoedema. Many hydrocoele cases still occur in bancroftian filariasis areas and this must also be attended to if GPELF is to eventually achieve an adequately high level of success.
Many of the issues that, if addressed, will aid the MMDP component of GPELF to reach success by 2030 are listed in Table 5. Among the most important of these are the need (1) to assist endemic countries to reach full countrywide provision of the MPC, and (2) to integrate care for patients into national health services, especially for those who need long-term (indeed life-long) care. It is also important to recognise how MMDP for LF is closely aligned with many other current global health initiatives such as global surgery, WASH (water, sanitation and hygiene) and UHC; these global links can be used to enhance the progress to success with GPELF.
Although not specifically included in the GPELF dossier requirements, it will also be important to continue to improve the menu of care, and importantly the support for rehabilitation provided to LF patients. Important advances are likely to come as investigators explore mental health 33 and social aspects, develop new skin care therapies and gain a better understanding of the role of systemic agents such as antibiotics. 31 More versatile ways of assessing the success and impact of MMDP activities, including direct assessment of clinical and well-being improvements in patients, will also most likely provide benefit to the overall success of GPELF. One specific area that is a challenge, and which will become a greater challenge in the final stages of GPELF, is the provision of appropriate (usually long-term) care for the most affected and debilitated LF patients and the most serious lymphoedema cases, many of whom have comorbidities. These patients are always seen by the public as representing the dominant 'image of disease' and thus it is important to actively provide them with care and not neglect them.
Implementing a national programme to provide care for a condition that is not commonly considered to be acute or life-threatening presents a challenge in terms of financial costs and utilisation of medical staff. Ensuring support at ministerial level for the overall goal of 100% geographic coverage of endemic countries with LF MMDP services is essential for programmatic success.
Conclusion
The GPELF has brought clinical filariasis into a much clearer global focus and, although there is still much to be learned, major steps in our understanding of the physical, psychosocial and economic burden of LF have been achieved since it began. Increased efforts to provide both hydrocoele surgeries and lymphoedema/acute filarial attack treatments over the next decade are needed to ensure that current successes continue and, importantly, to ensure that both existing LF patients and any de novo cases are provided with continual quality care for as long as necessary.
Finally, it is key to emphasise that although the breaking of transmission of infection is a tremendous achievement for each endemic country, it is vitally important not to let this most laudable of epidemiological goals overshadow other aspects of MMDP in GPELF. The complete success of the programme involves both elements of the plan. A central reason for the original establishment of GPELF was the existence of LF patients, thus ultimate programmatic success can be defined as the absence of any new LF patients and the improved well-being of the remaining patients. | 2020-10-28T18:49:02.585Z | 2020-12-22T00:00:00.000 | {
"year": 2020,
"sha1": "f6a4387e963851bcfb5cabae3fb66b6d03bcc2a1",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/inthealth/article-pdf/13/Supplement_1/S48/35056141/ihaa080.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "c4f66e1a10ba11f3686c5e9bba387ae2a68dbffc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18699091 | pes2o/s2orc | v3-fos-license | Alcohol Consumption and Breast Cancer Survival: A Meta-analysis of Cohort Studies
Breast cancer patients are often concerned that their lifestyle might affect their survival, so they often wonder whether modifying these behavioral factors would improve their prognosis (Newcomb et al., 2013). Of these lifestyle factors, alcohol consumption is a modifiable factor. However, the available evidence is conflicting. Both meta-analyses and pooled studies (Poli et al., 2012; Bagnardi et al., 2013) showed that alcohol consumption increases the risk of breast cancer. However, the information on whether alcohol consumption is related to breast cancer survival is mixed (Hebert et al., 1989; Ewertz et al., 1991; Rohan et al., 1993; Fuchs et al., 1995; Zhang et al., 1995; Thun et al., 1997; Holmes et al., 1999; Saxe et al., 1999; Jain et al., 2000; McDonald et al., 2002; Borugian et al., 2004; Barnett et al., 2008; Reding et al., 2008; Franceschi et al., 2009; Flatt et al., 2010; Hellmann et al., 2010; Kwan et al., 2010; Allemani et al., 2011; Beasley et al., 2011; Breslow et al., 2011; Harris et al., 2012; Vrieling et al., 2012; Holm et al., 2013; Kwan et al.,
Materials and Methods
We conducted this systematic review of the available literature in accordance with the Guidelines for Meta-Analyses and Systematic Reviews of Observational Studies (MOOSE; Stroup et al., 2000).
Search Strategy
PubMed, EMBASE, and ISI Web of Knowledge were searched using (alcohol* OR ethanol) AND (breast cancer* OR breast neoplasm* OR breast tumor* OR breast adenocarcinoma). Subject heading terms were added to the PubMed and EMBASE searches. Reference lists from the review articles and identified studies were reviewed to identify further relevant citations. All searches were conducted independently by two reviewers (Yunjiu Gou and Dingxiong Xie) in February 2013 without language restrictions; differences were resolved by discussion.
Inclusion criteria and study selection
We identified all published cohort studies that evaluated whether alcohol consumption affects survival (including mortality or recurrence) in breast cancer patients. When multiple articles for a study were published, we used the most comprehensive data. Letters, comments, editorials, practice guidelines, and trials published without the outcome measures of interest were excluded. Two reviewers (Yunjiu Gou and Yali Liu) independently assessed potentially relevant citations for inclusion; disagreements were resolved by involving a third reviewer (Kehu Yang).
Data abstraction
Using a standardized data extraction form, two authors (Li Bin and Zhang Jianhua) collected the following baseline characteristics for the case and control groups: lead author, publication year, variation in age, sample size, and outcomes. Any disagreements in abstracted data were resolved by a third reviewer (Xiaodong He).
Data analysis
Meta-analysis was conducted using Comprehensive Meta-Analysis software. We expressed the data using the hazard ratio (HR) and its 95% confidence interval (95% CI). Data were pooled using the random-effects model. The percentage of variability across trials attributable to heterogeneity was estimated with the I² statistic, and heterogeneity was deemed significant when p was less than 0.05. Subgroup analyses of different ER statuses (ER positive vs. ER negative), different menopausal statuses (premenopausal vs. postmenopausal), and different doses, based on available data, were conducted. Publication bias was assessed by visually inspecting a funnel plot. The small-study effect in terms of publication bias was also estimated using Egger's linear regression test.
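For illustration, random-effects pooling of hazard ratios of the kind described above can be sketched as follows. This is a DerSimonian-Laird reconstruction in Python with invented study inputs; it is not the actual data of this meta-analysis, nor the Comprehensive Meta-Analysis implementation.

```python
# Minimal sketch: DerSimonian-Laird random-effects pooling of log hazard
# ratios, with Cochran's Q and I^2; the study inputs are illustrative.
import numpy as np

hr = np.array([1.10, 0.95, 1.30, 1.05])        # per-study hazard ratios
ci_upper = np.array([1.45, 1.20, 1.80, 1.40])  # upper 95% CI bounds

y = np.log(hr)                                 # effect sizes on the log scale
se = (np.log(ci_upper) - y) / 1.96             # SE recovered from the CI
w = 1 / se**2                                  # fixed-effect weights

# Cochran's Q, I^2, and the between-study variance tau^2
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
I2 = max(0.0, (Q - (k - 1)) / Q) * 100         # heterogeneity, in percent

# Random-effects pooled HR and its 95% CI
w_re = 1 / (se**2 + tau2)
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
lo, hi = np.exp(y_re - 1.96 * se_re), np.exp(y_re + 1.96 * se_re)
print(f"Pooled HR = {np.exp(y_re):.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {I2:.0f}%")
```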
Characteristics of included studies
All studies focused on breast cancer mortality, and only five studies (Hebert et al., 1989; Saxe et al., 1999; Kwan et al., 2010; Holm et al., 2013; Kwan et al., 2013) focused on breast cancer recurrence. Of the 25 studies, 14 were from the USA, three were from Denmark, two were from Germany, and the rest were from Italy, Australia, Canada, France, Sweden, and the UK, respectively. Fourteen studies were about pre-diagnostic alcohol drinking, ten were about post-diagnostic alcohol drinking, and one was about both pre- and post-diagnostic alcohol drinking. The total sample size for all included cohort studies was 719,555, the number of breast cancer deaths was 10,912, and the number of breast cancer recurrences was 2,027. The median follow-up ranged from 2.9 years to 18 years. The other characteristics are presented in Table 1.
Subgroup analysis of different ER statuses showed that the relationship of alcohol consumption with mortality and recurrence did not differ in ER-positive and ER-negative breast cancer patients. Subgroup analysis of different menopausal statuses showed that the relationship of alcohol consumption with mortality did not differ in postmenopausal and premenopausal breast cancer patients. However, increased breast cancer recurrence was associated with premenopausal status, but not with postmenopausal status (Table 2).
Subgroup analysis of different alcohol consumption levels
Although the relationships of different levels of alcohol consumption (<10 g/d, >10 g/d, <15 g/d, >15 g/d, and <20 g/d) with breast cancer mortality and recurrence were not significant, there seemed to be a dose-response relationship of alcohol consumption with breast cancer mortality and recurrence. Only one dose of alcohol consumption (>20 g/d) was associated with increased breast cancer mortality, but not with increased breast cancer recurrence (Table 3).
Publication bias
There was no significant publication bias based on the funnel plot (Figure 4). Egger's test indicated no evidence of publication bias for the relationship of alcohol drinking with breast cancer mortality (intercept 0.26, 95% CI -0.54 to 1.07, p = 0.26).
Discussion
Summary of findings: We included 25 cohort studies. The meta-analysis results showed that alcohol consumption was not associated with increased breast cancer mortality and recurrence after pooling all data from highest versus lowest comparisons. Subgroup analyses showed that pre-diagnostic or post-diagnostic drinking status, and ER-negative or ER-positive status, did not affect the relationship of alcohol consumption with breast cancer mortality and recurrence. Subgroup analysis of menopausal statuses showed that menopausal status did not affect the relationship of alcohol consumption with breast cancer mortality, but might affect the relationship of alcohol consumption with breast cancer recurrence. Although the relationships of different levels of alcohol consumption (<10 g/d, >10 g/d, <15 g/d, >15 g/d, and <20 g/d) with breast cancer mortality and recurrence were not significant, there seemed to be a dose-response relationship of alcohol consumption with breast cancer mortality and recurrence. Only alcohol consumption of >20 g/d was associated with increased breast cancer mortality, but not with increased breast cancer recurrence.
The association between alcohol consumption and an increased risk of breast cancer has been established (Longnecker, 1994; Suzuki et al., 2008). Studies have shown that mechanisms underlying the association of alcohol intake and breast cancer risk include increased estrogen and androgen levels, enhanced mammary gland susceptibility to carcinogenesis, increased mammary carcinogen DNA damage, and greater metastatic potential of breast cancer cells (Singletary et al., 2001; Reding et al., 2008). That is why reductions in alcohol intake were expected to improve breast cancer survival through similar mechanisms (Newcomb et al., 2013). However, our meta-analysis showed that alcohol consumption was not associated with increased breast cancer mortality and recurrence. This is consistent with the result from a recent systematic review (Hauner et al., 2011). So, based on the available evidence, alcohol consumption did not affect breast cancer survival.
It has been reported that the magnitude of damage by alcohol intake likely depends on the amount of alcohol consumed (Stroup et al., 2000; Bagnardi et al., 2013). A meta-analysis (Longnecker, 1994) showed there was a modest dose-response relationship between alcohol drinking and breast cancer. Our meta-analysis showed that the relationships of different levels of alcohol consumption (<10 g/d, >10 g/d, <15 g/d, >15 g/d, and <20 g/d) with breast cancer mortality and recurrence were not significant, but there seemed to be a dose-response relationship of alcohol consumption with breast cancer mortality and recurrence. And alcohol consumption of >20 g/d was associated with increased breast cancer mortality, but not with increased breast cancer recurrence. Combining this with the results of the associations of pre-diagnostic and post-diagnostic alcohol consumption with breast cancer mortality and recurrence, we could suggest that too much alcohol intake (>20 g/d) should be avoided.
It has been said that ER status could affect the relationship of alcohol consumption with breast cancer risk (Suzuki et al., 2008), as increased estrogen and androgen levels in women consuming alcohol appear to be important mechanisms underlying the association (Breslow et al., 2011). But based on the results of our meta-analysis, ER status did not affect the relationship of alcohol consumption with breast cancer mortality and recurrence. This might be because few studies stratified by ER status when they evaluated the relationship of alcohol consumption with breast cancer mortality and recurrence. Available evidence showed that menopausal status could not affect the relationship of alcohol intake with breast cancer mortality, but might affect the relationship of alcohol intake with breast cancer recurrence. But there were also few studies that evaluated the relationship of alcohol intake with breast cancer mortality and recurrence stratified by menopausal status.
Strengths and limitations: Our meta-analysis is the first meta-analysis to evaluate the relationship of alcohol intake with breast cancer mortality and recurrence. Our meta-analysis also conducted subgroup analyses of different ER statuses, menopausal statuses, and different doses. Even so, our meta-analysis has its own limitations. First, few studies evaluated the relationship of alcohol intake with breast cancer mortality and recurrence stratified by menopausal and ER status, and few studies evaluated the relationship of alcohol intake with breast cancer recurrence. Second, as we did not find a suitable criterion to evaluate the quality of the included studies, no quality assessment was performed, which could make our analysis less powerful than expected.
Implications for research and practice: Based on our meta-analysis, alcohol drinking was not associated with increased breast cancer mortality and recurrence. Menopausal and ER status did not affect the relationship of alcohol drinking with breast cancer mortality, but whether menopausal and ER status affect the relationship of alcohol drinking with breast cancer recurrence deserves further research. And as few studies evaluated the relationship of alcohol intake with breast cancer recurrence, or evaluated the relationship of alcohol intake with breast cancer mortality stratified by menopausal and ER status, these kinds of studies should be conducted.
Our meta-analysis also showed that the relationships of different levels of alcohol consumption (<10 g/d, >10 g/d, <15 g/d, >15 g/d, and <20 g/d) with breast cancer mortality and recurrence were not significant, although there seemed to be a dose-response relationship of alcohol consumption with breast cancer mortality and recurrence. And alcohol consumption of >20 g/d was associated with increased breast cancer mortality, but not with increased breast cancer recurrence. So, in clinical practice, breast cancer patients should avoid excessive alcohol drinking, such as alcohol consumption of >20 g/d.
Conclusion: Although our meta-analysis showed that alcohol drinking was not associated with increased breast cancer mortality and recurrence, there seemed to be a dose-response relationship of alcohol consumption with breast cancer mortality and recurrence, and alcohol consumption of >20 g/d was associated with increased breast cancer mortality. So, in clinical practice, breast cancer patients should avoid excessive alcohol drinking, such as alcohol consumption of >20 g/d. And studies that evaluate the relationship of alcohol intake with breast cancer recurrence, and the relationship of alcohol intake with breast cancer mortality stratified by menopausal and ER status, should be conducted.
Figure 2. Meta-analysis Results of Subgroup Analysis of the Relationship of Pre- and Post-Alcohol Consumption Statuses with Breast Cancer Mortality
Figure 4. Publication Bias Analysis Based on Funnel Plot | 2017-03-31T19:40:59.929Z | 2013-08-30T00:00:00.000 | {
"year": 2013,
"sha1": "2f2624b2d9f9505d0af33acee8ac1410f0424c20",
"oa_license": "CCBY",
"oa_url": "http://society.kisti.re.kr/sv/SV_svpsbs03V.do?cn1=JAKO201332479512450&method=download",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "2f2624b2d9f9505d0af33acee8ac1410f0424c20",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259801416 | pes2o/s2orc | v3-fos-license | Glucosinolates: Structure, classification, biosynthesis and functions in higher plants
Glucosinolates are hydrolyzed by the myrosinase enzyme after the plant perceives a stress signal. After hydrolysis, the resulting compounds comprise isothiocyanates, thiocyanates, epithionitriles, and nitriles (Hanschen and Schreiner, 2017). In species of the Brassicaceae family, the nutrients C, N, and S are used for the synthesis of GSLs (Koroleva et al., 2010; Jeschke et al., 2019), which fulfill important defense functions against biotic and abiotic factors (Feng et al., 2022). At least 130 different structures of GSLs have been identified in species of this plant family (Essoh et al., 2020).
The term glucosinolate refers to the glucosyl ("gluco") moiety, the presence of a sulfate ("ate") group, and the property of being a precursor to a mustard oil ("sinol"). GSLs have been defined as natural substances found in different plants that participate as part of a defense mechanism against herbivorous insects. Plants of the Brassicaceae family, such as cabbage (Chhajed et al., 2020), mustard (Brassica nigra; Blažević et al., 2020), broccoli, Brussels sprouts, cauliflower, kohlrabi (Brassica napobrassica), and radish (Marcinkowska et al., 2020), show these metabolites in the highest concentration. The amount of GSLs varies from one species to another and is directly influenced by the type of plant tissue (Nguyen et al., 2020).
GSLs are responsible for the spiciness of species such as mustard or horseradish. In some cases, they may offer protection against some types of cancer. In particular, the raw consumption of species from the Brassicaceae family offers high bioavailability of isothiocyanates (produced by myrosinase activity on GSLs). Among the isothiocyanates are benzyl isothiocyanate, phenethyl isothiocyanate, and sulforaphane [1-isothiocyanato-4-(methyl-sulfinyl)butane], which have been proven to target proteins related to cell proliferation and homeostasis. The interaction of isothiocyanates with proteins involved in DNA repair inhibits the cell cycle and induces programmed cell death, actions that reduce tumor growth (Soundararajan and Kim, 2018).
GSLs are transported by the phloem and can help the plant defend itself against organisms that feed on phloem products; through this transport the plant also acquires the ability to coordinate the synthesis and use of protective resources between different organs (Koroleva et al., 2010). The defense function of GSLs in plants derives from the thioglycoside products of their hydrolysis.
STRUCTURE AND CLASSIFICATION OF GLUCOSINOLATES
The structure of glucosinolates consists of a sulfonated aldoxime domain linked to a β-D-thioglucose group together with a side chain (aglycone) derived from one or several amino acids (Figure 1; Blažević et al., 2020; Sugiyama et al., 2021).
The final products of the degradation of GSLs depend on factors such as pH, the availability of ferrous ions, and proteins that interact with the thioglucoside glucohydrolase enzyme (Martínez-Ballesta et al., 2013).
The storage of GSLs and thioglucoside glucohydrolase enzymes is spatially distinct. Therefore, they only interact after the plant has faced some kind of stress. Specialized cell types can act as different storage locations: S-cells for GSLs and myrosin cells for classical myrosinases (Mitreiter et al., 2021). S-cells contain up to 40% of the total sulfur of Arabidopsis thaliana flower stem tissue (Koroleva et al., 2010).
BIOSYNTHESIS OF GLUCOSINOLATES
The biosynthesis of glucosinolates occurs mainly in the leaves, from where they are transported to other organs of the plant. Their biosynthesis in different organs is more active in young growth stages and less so in mature stages (Feng et al., 2022).
The biosynthesis of GSLs consists of three stages (Figure 4): I) chain elongation, in which a methylene group is inserted into the side chain of aliphatic amino acids; II) the metabolic reconfiguration of the rest of the amino acids to produce the central structure; and III) the modification of the core structure to produce GSLs with various aglycone structures (Nguyen et al., 2020).
The first stage begins with a deamination of the amino acids by branched-chain amino acid aminotransferase (BCAT), which transforms them into 2-oxoacids; these condense with acetyl-coenzyme A by the action of methylthioalkylmalate synthases (MAMs), thus forming a 2-malate derivative. This last compound is isomerized to a 3-malate derivative by isopropylmalate isomerase (IPMI). This is followed by decarboxylation by the isopropylmalate dehydrogenase enzyme (IPMDH), which produces an elongated 2-oxoacid intermediate that can undergo transamination to provide extended amino acids for the next stage or re-enter the transformation cycle for further elongation (Nguyen et al., 2020; Figure 4).
In the second stage (Figure 4), the amino acids are oxidized into aldoximes. This oxidation is catalyzed by three enzyme systems [cytochrome-P450 (CYP79)-dependent monooxygenase, flavin-containing monooxygenase, and peroxidase]; the participation of each enzymatic system depends on the nature of the amino acid precursors. Cytochrome monooxygenases CYP83 activate the aldoxime resulting from oxidation of the amino acid. The activated aldoxime is conjugated with reduced glutathione (GSH), which donates sulfur to produce an S-alkyl-thiohydroximate intermediate. This intermediate is cleaved by the activity of a C-S lyase enzyme, SUR1, to form thiohydroximates. These thiohydroximates are transformed by the UDP-glucose:thiohydroximic acid S-glucosyltransferase (S-GT) and desulfoglucosinolate sulfotransferase enzymes to produce the core structure of GSLs with the corresponding side chains (Nguyen et al., 2020).
In the third stage, chemical transformations of the GSL side chains occur through enzyme-catalyzed oxidations, eliminations, alkylations, and esterifications (Figure 4). These modifications contribute to the structural diversity of GSLs (Nguyen et al., 2020).
FUNCTIONS OF GSLs IN PLANTS
GSLs are widely synthesized in species of the Capparidaceae, Brassicaceae, Resedaceae, and Moringaceae families (Lockwood, 1988), although most studies have been done on species of the Brassicaceae family.
When the species of the Brassicaceae family suffer an attack, the GSLs are hydrolyzed by thioglucoside glucohydrolase (myrosinase) enzymes into different defense products, including isothiocyanates, which are the most characterized. Isothiocyanates are toxic to insect pests and disease-causing pathogenic microorganisms. However, when synthesized excessively, these compounds can be harmful to the plant, as they can cause stomatal closure, alter microtubules in the cytoskeleton, deplete reduced glutathione (GSH), inhibit root growth or induce cell death (Ting et al., 2020).
GSLs act as excellent defense mechanisms against generalist herbivores, but are less effective against specialist herbivores (Schweizeir et al., 2013). In addition, these sulfur compounds can also be toxic to microbial pathogens both in the soil and in the aerial part of the plant (Mitreiter et al., 2021). To produce crops with a greater amount of desirable compounds, several strategies can be followed. The first is to select species, genotypes, or cultivars that contain a genetically determined higher level of phytochemicals (Bouargalne et al., 2022; Zhan et al., 2022). The second is to manipulate the growth factors and environmental conditions for plant cultivation (Trejo-Téllez et al., 2019; Šamec et al., 2021). A third alternative is the use of genetic engineering, metabolic engineering, and genome editing (Miao et al., 2021).
In adverse environmental conditions such as drought, salinity, extreme temperatures, and excessive exposure to UV radiation, plants activate defense mechanisms that include the accumulation of specialized metabolites or phytochemicals (Šamec et al., 2021).These natural plant defense mechanisms can be stimulated during the cultivation of certain species, which triggers greater production of desirable compounds.
Eustressors are biological, physical, or chemical stressors that activate signaling pathways that lead to increased content of bioactive compounds.Salinity is considered a chemical stress factor that affects the physical quality and chemical composition of various plant products (Rouphael et al., 2018).
By increasing the level of salinity in crops of species of the Brassicaceae family, a concomitant rise in the content of bioactive compounds can be observed, at the expense of their growth and yield (Santander et al., 2022).
Salinity differentially affects the metabolism of GSLs in plants, which depends on environmental conditions such as temperature and radiation, nutritional management, type of GSL synthesized, and the genotype of the plant (Rios et al., 2020).
In Brassica oleracea L. var. italica exposed to 40 and 80 mM NaCl for two weeks, an increase in the content of GSLs was observed, as was also the case in Brassica rapa L. exposed to 20, 40, and 60 mM NaCl for five days (Steinbrenner et al., 2012).
In species of the Brassicaceae family, GSLs can represent up to 30% of the total sulfur concentrations (Falk et al., 2007; Sugiyama et al., 2021). This means that GSLs can be nutrient reservoirs, which under nutrient deficiency can be hydrolyzed by myrosinase enzymes, so that sulfur is reallocated to primary metabolites such as cysteine (Sugiyama et al., 2021). Thus, under stress conditions, these secondary metabolites can be degraded for the formation of other molecules.
Given the importance of different species of the Brassicaceae family in human nutrition, it is important to highlight that GSLs can contribute to improving health as these compounds have shown protective properties against the incidence of cancer and cardiovascular diseases (Traka, 2016).
CONCLUSIONS
GSLs are secondary metabolites rich in N and S; they are mainly synthesized by plant species of the Brassicaceae family. By the type of amino acid from which GSLs come, they are divided into aliphatic, aromatic, and indole GSLs. The products of their hydrolysis mediated by myrosinase enzymes play a role in increasing tolerance to biotic and abiotic stress factors. In addition, given their composition, they can serve as a nutrient reservoir under deficiency conditions. Finally, GSLs have nutritional functions and can contribute to improving human health.
Figure 2. Classification of glucosinolates (GSLs) according to the type of precursor amino acid. Aliphatic GSLs are derived from methionine, isoleucine, leucine, or valine. Aromatic GSLs are derived from phenylalanine or tyrosine. Indole GSLs are derived from tryptophan. | 2023-07-12T16:58:42.695Z | 2023-06-05T00:00:00.000 | {
"year": 2023,
"sha1": "2b139736d4c38ff8915c7fcd11429f0b9dfce86f",
"oa_license": "CCBYNC",
"oa_url": "https://mail.revista-agroproductividad.org/index.php/agroproductividad/article/download/2567/2073",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "3869e5f600e779257f2a8dd6fce093ab7090844d",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
234839356 | pes2o/s2orc | v3-fos-license | A Novel Anti-Jamming Technique for INS/GNSS Integration Based on Black Box Variational Inference
In this paper, a novel anti-jamming technique based on black box variational inference for INS/GNSS integration with time-varying measurement noise covariance matrices is presented. We proved that the time-varying measurement noise is more similar to a Gaussian distribution with a time-varying mean value than to an Inv-Gamma or Inv-Wishart distribution, as measured by the Kullback–Leibler divergence. Therefore, we assumed the prior distribution of the measurement noise covariance matrices to be Gaussian, and calculated the Gaussian parameters by the black box variational inference method. Finally, we obtained the measurement noise covariance matrices by using the Gaussian parameters. The experimental results illustrate that the proposed algorithm performs better in resisting time-varying measurement noise than the existing variational Bayesian adaptive filter.
Introduction
The loosely integrated inertial navigation system (INS) and global navigation satellite system (GNSS) corrects INS errors and helps INS to complete navigation tasks by providing the velocity and position of the GNSS. However, GNSS signals are extremely weak when they reach the ground, hence, the signals are vulnerable to interference [1] such as radio signals that fall into the navigation signal pass band and multipath interference caused by reflection, scattering [2] or obstruction from buildings, tunnels and trees [3]. All of these interferences can lead to unknown noise statistics in GNSS navigation parameters. Thus, GNSS is unable to output accurate navigation information and eventually the INS/GNSS integration is unable to work properly.
The results of interference on GNSS parameters can be divided into three categories according to various studies. The first is GNSS outages: because the GNSS cannot output navigation parameters, it is unable to complete auxiliary tasks [4]. Researchers usually solve such cases of interference by machine learning methods, such as neural networks [5], regression algorithms [6], fault detection and isolation [7], or multiple receivers [8]. The second result is outliers in the GNSS navigation parameters [9]. For this type of interference, researchers usually adopt a Student's t distribution model [10]. The third outcome is that the navigation accuracy of the GNSS is lower than the accuracy without interference, but it can still complete auxiliary tasks; an example is the time-varying noise caused by interference. In this paper, the third type of interference noise is researched.
The adaptive Kalman filter (AKF) is the most common method to solve the problem of time-varying noise [11]. The Sage-Husa AKF estimates the noise statistics recursively based on the maximum a posteriori criterion [12,13]. However, the measurement noise of the actual system may be too small compared with the theoretical value, or the initial state noise setting might be too large, which results in the measurement noise covariance matrix (MNCM) losing its positivity and causes filtering divergence. Li et al. presented a multiple model AKF (MMAKF), which can deal with model uncertainty by operating a bank of Kalman filters with different models simultaneously [14]. However, the MMAKF suffers from substantial computational complexity, and thus has poor real-time performance. Särkkä et al. first proposed the variational Bayesian (VB) AKF (VBAKF) model to solve the time-varying noise problem [15]. The VBAKF algorithm iteratively estimates the MNCM by using the variational inference (VI) method. The VBAKF algorithm is currently favored by many scholars because of its low computation requirements and accurate estimation. Li et al. applied VBAKF in target tracking to achieve accurate estimation of targets [16]. Shen et al. used VBAKF in INS/GNSS integration to estimate unknown MNCM [17]. Yu et al. proposed a series of VB nonlinear filters for unknown MNCM, such as the VB Extended Kalman Filter (VBEKF) and the VB Unscented Kalman Filter (VBUKF) [18]. The VB Cubature Kalman Filter (VBCKF) method was proposed to improve the estimation accuracy of nonlinear systems in [19]. VB and Monte Carlo sampling were used to solve the unknown measurement noise and uncertain parameters in [20]. Huang et al. solved the problem of inaccurate measurement noise and process noise with VBAKF [11]. However, Xu et al. proposed that the distribution of the process noise covariance matrix (PNCM) and the distribution of the system state are non-conjugate, which cannot be solved directly by VBAKF. Therefore, they solved the inaccurate PNCM by black box variational inference (BBVI) [21].
In the existing anti-jamming methods based on VBAKF, to satisfy the conjugate conditions of the VBAKF algorithm, the prior distribution of MNCM was assumed to be an Inv-Gamma distribution or Inv-Wishart distribution. However, in the trial data of INS/GNSS integration, it was found that the time-varying noise was more similar to the Gaussian distribution with a changeable mean value. Therefore, in order to ensure the assumed approximate distribution was closer to the real distribution and estimate the MNCM more accurately, the Gaussian distribution was proposed as the prior distribution of MNCM in this paper. However, this assumption leads to a non-conjugate problem between the prior distribution and likelihood distribution. For this reason, the VBAKF algorithm cannot be used to estimate MNCM.
We learnt from [21], which used BBVI to solve the non-conjugate problem between the PNCM distribution and system state distribution. In this paper, a novel anti-jamming technique for INS/GNSS integration based on black box variational inference was proposed. We assumed the prior distribution of the MNCM was a Gaussian distribution. Then, we estimated the gradient of the Gaussian distribution parameters by using the BBVI algorithm. Lastly, we calculated the MNCM by using the parameters. The proposed algorithm and existing VBAKF were applied to the problem of INS/GNSS integration with time-varying measurement noise. Experimental results show that the proposed filter has a smaller root mean square error (RMSE) than existing VBAKF methods.
The remainder of the paper is organized as follows. First, we analyze the problems of VBAKF estimates of MNCM in the INS/GNSS integration. Second, the novel anti-jamming algorithm for INS/GNSS integration is presented. Third, the experimental results are analyzed and discussed. Finally, the conclusions are presented.
Problem Description
In this section, first, we establish the system models of INS/GNSS integration. Then, the influence of time-varying noise on the system is analyzed, and the VBAKF estimation method of MNCM is introduced. Finally, we analyze the distribution type of time-varying noise and the problems related to the existing VBAKF algorithm.
INS/GNSS Integration System Model
The loosely integrated INS/GNSS navigation system can output information on velocity, position, and attitude, using the measurement information of velocity and position. The system models are as follows.
As the application background of this paper is vehicle navigation, higher accuracy is required for the horizontal navigation parameters. So, for the velocity and position parameters, we only take their horizontal components to establish the state equations. We take the errors of horizontal velocity, horizontal position, platform angles, accelerometer bias, and gyroscope bias as the state vector [22], which can be written as

X = [δvE, δvN, δϕ, δλ, φE, φN, φU, ∇x, ∇y, ∇z, εx, εy, εz]^T,

where X is the state vector, δvE and δvN are the horizontal velocity errors, δϕ and δλ are the horizontal position errors, φE, φN, and φU are the misalignment angles, ∇x, ∇y, and ∇z are the accelerometer biases, and εx, εy, and εz are the gyroscope biases.
The measurement vector is derived from the differences between the velocity and position outputs of the INS and GNSS, and can be given by

Z = [V_INS − V_GNSS, P_INS − P_GNSS]^T,

where Z is the measurement vector, V_INS and V_GNSS are the horizontal velocities of the INS and GNSS, respectively, and P_INS and P_GNSS are the horizontal positions of the INS and GNSS, respectively.
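To make this structure concrete, the following Python sketch assembles Z from one epoch of INS and GNSS outputs. It is an illustrative sketch, not the authors' implementation: the function name and array layout are assumptions, and the scaling of angular position errors to metres via the earth radii is omitted.

```python
import numpy as np

def measurement_vector(v_ins, p_ins, v_gnss, p_gnss):
    """Z = [V_INS - V_GNSS, P_INS - P_GNSS] for the horizontal channels
    (east/north velocity in m/s; positions kept in common units here)."""
    return np.hstack([np.asarray(v_ins) - np.asarray(v_gnss),
                      np.asarray(p_ins) - np.asarray(p_gnss)])

# Hypothetical one-epoch outputs: [east, north] velocity and position.
z = measurement_vector([10.1, 4.9], [500.2, 301.0],
                       [10.0, 5.0], [495.0, 300.0])
print(z)  # 4-element measurement vector fed to the filter
```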
The state equations and measurement equations of the system are established in the navigation coordinate system. The system models can be described as

X_k = Φ_{k,k−1} X_{k−1} + W_{k−1},  Z_k = H_k X_k + V_k,

where Φ is the one-step state transition matrix, W is the system noise, H is the measurement matrix, and V is the measurement noise. The state equations of Equation (1) are described in detail as follows.
where V_E and V_N are the horizontal velocities, ϕ and λ are the horizontal positions, ω_ie is the angular rate of the earth's rotation, R_M and R_N are the earth radii, and f_E, f_N, f_U are the specific forces. In order to clearly describe the effect of time-varying noise on INS/GNSS integration, we conducted a simulation experiment. The INS/GNSS integration operates without interference for 10 min. Then, the standard deviation of random white noise at the GNSS east and north position was increased from 5 m to 50 m, and the standard deviation of random white noise at the east and north velocity was increased from 0.2 m/s to 2 m/s. The interference lasted for 5 min and then returned to the non-interference state. The total time of the simulation was 30 min. The influence of the time-varying noise on the velocity and position errors is shown in Figures 1 and 2.
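The jamming scenario just described is straightforward to reproduce. The sketch below generates white measurement noise whose standard deviation jumps during a five-minute window, using the values quoted above; the seed and array layout are arbitrary choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 30 * 60)                    # 1 Hz epochs over 30 min

# Interference window: minutes 10-15, as in the simulation above.
jammed = (t >= 10 * 60) & (t < 15 * 60)
pos_std = np.where(jammed, 50.0, 5.0)        # east/north position (m)
vel_std = np.where(jammed, 2.0, 0.2)         # east/north velocity (m/s)

pos_noise = rng.normal(size=(t.size, 2)) * pos_std[:, None]
vel_noise = rng.normal(size=(t.size, 2)) * vel_std[:, None]
```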
As demonstrated in Figures 1 and 2, the velocity and position have large unsteady errors when time-varying noise appears in the measurement information. Therefore, it is necessary to estimate the accurate value of the MNCM to suppress the interference caused by time-varying noise to the navigation system.
Estimating MNCM with VBAKF
The statistical characteristics of V will be changed when GNSS is disturbed. Consequently, the MNCM is uncertain. At this point, if we continue to use the MNCM of the initial setting, the estimated values of the navigation parameters are not accurate. Therefore, MNCM should be estimated accurately to improve navigation accuracy.
The idea of estimating the MNCM by VBAKF can be summarized as follows. First, the real distribution of the MNCM is replaced with an approximate distribution. Then, the Kullback-Leibler divergence (KLD) is used to measure the degree of similarity between the approximate distribution and the real distribution. When the evidence lower bound (ELB) reaches its maximum value, the KLD value is zero; that is, the approximate distribution equals the real distribution. Thus, the problem of estimating the MNCM turns into the problem of maximizing the ELB. The ELB maximum can be found with the VB general formula, but only if the approximate distribution and the likelihood distribution are conjugate.

The steps for estimating the time-varying MNCM by VBAKF are as follows. In the framework of the Kalman filter, the likelihood distribution p(Z_k|R_k) is Gaussian, i.e.,

p(Z_k|X_k, R_k) = N(Z_k; H_k X_k, R_k),

where R is the MNCM and N(·; *, *) denotes the Gaussian probability density function (PDF). In order to satisfy the conjugate condition, the prior distribution of the MNCM, p(R_k|Z_1:k−1), is assumed to be an Inv-Gamma distribution or an Inv-Wishart distribution. Taking the Inv-Gamma distribution as an example, it can be described as a product of Inv-Gamma(·; α, β) factors over the d diagonal elements of R, where d is the dimension of R, α and β are the parameters of the Inv-Gamma distribution, and Inv-Gamma(·; *, *) denotes the Inv-Gamma PDF.
After the kth observation, we will replace p(R k |Z 1:k ) with a new approximate distribution q(R k |Z 1:k ). For simplicity, we omit the statement that the parameter R k is dependent on Z 1:k in the approximate distribution. Thus, the new approximate distribution can be expressed as q(R k ).
According to the VB general formula, and taking the log of both sides of the general formula, we can obtain the ELB, which can be expressed as in Equation (5), where C collects the constant terms. The VB general formula used in (5) can be described as

ln q(θ_j) = E[ln p(θ, Z)] + C,

where E[·] denotes the expectation with respect to q(θ_i) for i ≠ j. According to (5) and the logarithm of the Inv-Gamma PDF, we can see that R_k still obeys the Inv-Gamma distribution. According to the characteristics of the Inv-Gamma PDF, the expressions for the updated parameters follow from the measurement residual and the predicted error covariance matrix P.
According to the characteristics of the Inv-Gamma distribution, the MNCM estimate R̂_k can then be obtained directly from the posterior parameters α_k and β_k.
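For reference, a minimal sketch of this fixed-point recursion is shown below. Since the paper's own parameter-update equations were lost in extraction, the update forms follow the standard Inv-Gamma VB-AKF (one variance factor per measurement channel, with E[R] = diag(β/α) used inside the Kalman gain) and should be read as an assumption, not the authors' exact expressions.

```python
import numpy as np

def vbakf_update(x_pred, P_pred, z, H, alpha_prior, beta_prior, n_iter=5):
    """One VB measurement update with an Inv-Gamma factor on each
    diagonal element of R (a sketch of the standard VB-AKF form)."""
    alpha = alpha_prior + 0.5                  # shape parameters
    beta = beta_prior.copy()
    x, P = x_pred.copy(), P_pred.copy()
    for _ in range(n_iter):                    # fixed-point iterations
        R = np.diag(beta / alpha)              # current estimate of R
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x = x_pred + K @ (z - H @ x_pred)
        P = P_pred - K @ S @ K.T
        resid = z - H @ x
        beta = beta_prior + 0.5 * (resid**2 + np.diag(H @ P @ H.T))
    return x, P, alpha, beta
```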
Measurement Noise Analyses
The distribution of the time-varying noise caused by GNSS signal interference is most similar to a Gaussian distribution with a time-varying mean value. However, an Inv-Gamma or Inv-Wishart distribution is assumed as the prior distribution of the MNCM when estimating the MNCM by the VBAKF algorithm, purely to ensure that the prior distribution and the likelihood distribution are conjugate. Therefore, this assumption affects the estimation accuracy of the MNCM.
In order to prove the correctness of the above discussion, KLD was used to calculate the degree of similarity between time-varying noise and each distribution. We took a piece of trial data from the GNSS as an example. The time-varying noise of the GNSS east position is shown in Figure 3, where the data is derived from near the Airport Road. The distribution of the MNCM of the east position is shown in Figure 4. The Gaussian, Inv-Gamma, and Inv-Wishart distributions are shown in Figure 5.
From Figures 4 and 5, we can see that the time-varying noise still conforms to the Gaussian distribution; it is just that the mean value has changed. However, the distribution of the time-varying noise is not similar to the Inv-Gamma and Inv-Wishart distributions. Next, we analyze the similarity between the noise distribution and each approximate distribution based on the KLD.
The KLD formula is as follows:

KLD = Σ_{i=1}^{N} p(x_i) ln[p(x_i)/q(x_i)],  (12)

where p(x_i) denotes the distribution of the GNSS east position time-varying noise, q(x_i) denotes the Gaussian, Inv-Gamma, or Inv-Wishart distribution, and N is the number of samples. According to (12), we can obtain the average Kullback-Leibler divergence (AKLD), defined as

AKLD = (1/M) Σ_{m=1}^{M} KLD_m,  (13)

where M = 500 denotes the number of experiments.
The AKLD values for each distribution and time-varying noise are shown in Table 1. It can be seen from Table 1 that the Gaussian distribution is more similar to the distribution of the time-varying noise. However, the Inv-Gamma and Inv-Wishart distribution have poor similarity with the time-varying noise distribution.
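A sketch of this comparison is given below: fit each candidate distribution to a noise record and evaluate the discrete KLD of Equation (12) against the empirical histogram; averaging over M = 500 records then gives the AKLD of Equation (13). The data file name is a placeholder, and shifting the record before the Inv-Gamma fit is just one plausible way to handle non-positive samples.

```python
import numpy as np
from scipy import stats

def empirical_kld(noise, candidate_pdf, bins=50):
    """Discrete KLD between the histogram p(x_i) of a noise record and
    a fitted candidate density q(x_i), per Equation (12)."""
    p, edges = np.histogram(noise, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]
    q = candidate_pdf(centers)
    mask = (p > 0) & (q > 0)                    # avoid log(0)
    return float(np.sum(p[mask] * width * np.log(p[mask] / q[mask])))

noise = np.loadtxt("gnss_east_position_noise.txt")   # placeholder file

mu, sigma = stats.norm.fit(noise)                    # Gaussian candidate
shifted = noise - noise.min() + 1e-6                 # Inv-Gamma needs x > 0
a, loc, scale = stats.invgamma.fit(shifted)

kld_gauss = empirical_kld(noise, lambda x: stats.norm.pdf(x, mu, sigma))
kld_invg = empirical_kld(shifted,
                         lambda x: stats.invgamma.pdf(x, a, loc, scale))
```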
The Novel Anti-Jamming Technique
In this section, first, the prior distribution of MNCM is assumed to be a Gaussian distribution, and the problems of the VBAKF algorithm are analyzed according to this assumption. Second, on the basis of the above analysis, we propose estimating the MNCM and completing the anti-jamming task of INS/GNSS integration by using the BBVI algorithm.
Prior Distribution of MNCM
The proofs in Section 2.3 show that the time-varying noise is closer to the Gaussian distribution. Therefore, we assume the prior distribution of the MNCM to be Gaussian, which can be described as

p(R_k|Z_1:k−1) = N(R_k; µ, σ),  (14)

where µ and σ are the expectation and variance of R, respectively. After the kth observation, we replace p(R_k|Z_1:k−1) with q(R_k|Z_1:k). According to the general Formula (6), and taking the log of both sides of (6), the expression can be written as in Equation (15). By comparing (15) with the logarithmic expression of the Gaussian PDF, it can be seen that the posterior distribution of the MNCM no longer obeys a Gaussian distribution. That is, the prior distribution and the likelihood distribution are non-conjugate. Therefore, the conditions for VBAKF are not satisfied: the ELB cannot be calculated by the VI method, and the MNCM cannot be estimated.
Fortunately, Rajesh Ranganath et al. proposed the BBVI method, which aims to optimize the ELB by stochastic optimization. The condition for the approximate distribution and likelihood distribution to be conjugate is not required. More specifically, first, we form the derivative of the objective as an expectation with respect to the variational approximation; second, sampling from the variational approximation is used to get noisy but unbiased gradients; and last, we use the gradients to update the parameters of the approximate distribution [23].
BBVI Filter Based on Gaussian Distribution
By referencing the BBVI idea in [23] and the BBAKF method in [21], we derived the BBVI filter (BBVIF) anti-jamming algorithm with Gaussian distribution.
As in Section 3.1, we assume that the prior distribution of the MNCM is a Gaussian distribution; the expression is the same as (14). After the kth observation, the gradient of the ELB with respect to the parameters can be expressed as

∇_λ L(λ) = E_q[∇_λ log q(R_k|λ) (log p(Z_k, R_k|Z_1:k−1) − log q(R_k|λ))],  (16)

where λ = {µ, σ} denotes the parameters of R and ∇_λ L(λ) denotes the gradient with respect to λ. ∇_λ L(λ) can be approximated by a stochastic gradient estimator ∇̂_λ L(λ) built from Monte Carlo samples of the variational distribution [21]:

∇̂_λ L(λ) = (1/S) Σ_{s=1}^{S} ∇_λ log q(R_{k,s}|λ) (log p(Z_{k,s}, R_{k,s}|Z_1:k−1,s) − log q(R_{k,s}|λ)),  (17)

where S is the number of samples.
With (17), we can use stochastic optimization to optimize the ELB with respect to µ_k and σ_k, where log p(Z_{k,s}, R_{k,s}|Z_1:k−1,s) is given by

log p(Z_{k,s}, R_{k,s}|Z_1:k−1,s) = log[p(Z_{k,s}|R_{k,s}) p(R_{k,s}|Z_1:k−1,s)] = log p(Z_{k,s}|R_{k,s}, Z_1:k−1,s) + log p(R_{k,s}|Z_1:k−1,s),  (20)

with log q(R_{k,s}) being the logarithm of the Gaussian PDF. According to [21], we updated the parameters µ_k and σ_k by stochastic optimization, where the quantities g_k and G_k are updated by the Adaptive Gradient (AdaGrad) method. The variance of the gradient estimator (under the Monte Carlo estimate in Equation (16)) may be too large to be useful when we use the method above to maximize the ELB. In practice, a higher-variance gradient requires very small steps, which leads to slow convergence. Therefore, the variance needs to be reduced. Here, we refer to [23] and adopt the control variates method; with control variates, the gradients of the ELB with respect to the parameters can be computed with minimum variance. Finally, according to the characteristics of the Gaussian distribution, the estimated value of the MNCM is given by its mean, R̂_k = µ_k. The procedure for the anti-jamming method based on BBVI is summarized in Algorithm 1.
Algorithm 1. BBVIF anti-jamming filtering: the time update is the same as in the standard Kalman filter; the measurement update draws S samples from q(R_k), estimates the ELB gradients by (17), updates µ_k and σ_k by stochastic optimization, and then computes R̂_k and the filtered state.
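A minimal sketch of one such measurement update is shown below for a scalar measurement channel. The Gaussian score functions are exact; the marginal-likelihood form, the absolute-value treatment of negative variance samples, and the omission of control variates are simplifications and assumptions, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def bbvi_update(mu, sigma, mu0, sigma0, z, hx, P_z, S=50, eta=10.0,
                eps=1e-10, G=None):
    """One score-function (BBVI) step for the Gaussian variational
    parameters (mu, sigma) of a scalar measurement variance r;
    (mu0, sigma0) are the Gaussian prior parameters of Equation (14),
    z the measurement, hx the predicted measurement, P_z its variance."""
    r = np.abs(rng.normal(mu, sigma, S))        # Monte Carlo samples of r
    log_lik = (-0.5 * np.log(2 * np.pi * (r + P_z))
               - (z - hx) ** 2 / (2 * (r + P_z)))
    log_prior = (-0.5 * np.log(2 * np.pi * sigma0**2)
                 - (r - mu0) ** 2 / (2 * sigma0**2))
    log_q = (-0.5 * np.log(2 * np.pi * sigma**2)
             - (r - mu) ** 2 / (2 * sigma**2))
    f = log_lik + log_prior - log_q             # log p(z, r) - log q(r)
    score_mu = (r - mu) / sigma**2              # d/dmu log q(r)
    score_sigma = ((r - mu) ** 2 - sigma**2) / sigma**3
    grad = np.array([np.mean(score_mu * f), np.mean(score_sigma * f)])
    G = np.zeros(2) if G is None else G
    G += grad**2                                # AdaGrad accumulator
    mu, sigma = np.array([mu, sigma]) + eta * grad / (np.sqrt(G) + eps)
    return float(mu), float(max(sigma, 1e-6)), G
```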
To clearly demonstrate the difference between the VBAKF algorithm based on the Inv-Gamma or Inv-Wishart distribution and the proposed algorithm, the two algorithms are shown in Figure 6.
Figure 6. The algorithm flowchart of VBAKF and the proposed algorithm, where l is the degrees of freedom parameter and U is the inverse scale matrix.
As can be seen in Figure 6, in order to describe the prior distribution of the MNCM more accurately, the Gaussian distribution was selected as the approximate distribution of MNCM in this paper. Furthermore, in order to solve the problem that the prior distribution and likelihood distribution are non-conjugate after selecting the Gaussian distribution, and the problem that VBAKF cannot be applied, we proposed estimating the MNCM by the stochastic optimization of the BBVI algorithm.
Experimental Results and Analyses
In a previous study, the performance of the VBAKF was compared with the existing AKF in dealing with time-varying MNCM, and VBAKF proved to be superior to the existing AKF [11]. Therefore, we only compared the BBVI with the VBAKF algorithm, whose prior distribution is an approximate Inv-Gamma and Inv-Wishart distribution. Furthermore, the performance of the BBVIF anti-jamming algorithm with Gaussian distribution was verified by simulation and trial data.
Experiments with Simulation Data
The validity of the proposed algorithm was verified by simulations. The simulation conditions are set as follows.
The initial position is latitude 45.783898 degrees north and longitude 126.69458 degrees east. The simulation lasts 60 min.
The specifications of the INS and GNSS are listed in Table 2. As can be seen in Figure 6, in order to describe the prior distribution of the MNCM more accurately, the Gaussian distribution was selected as the approximate distribution of MNCM in this paper. Furthermore, in order to solve the problem that the prior distribution and likelihood distribution are non-conjugate after selecting the Gaussian distribution, and the problem that VBAKF cannot be applied, we proposed estimating the MNCM by the stochastic optimization of the BBVI algorithm.
Experimental Results and Analyses
In a previous study, the performance of the VBAKF was compared with the existing AKF in dealing with time-varying MNCM, and VBAKF proved to be superior to the existing AKF [11]. Therefore, we only compared the BBVI with the VBAKF algorithm, whose prior distribution is an approximate Inv-Gamma and Inv-Wishart distribution. Furthermore, the performance of the BBVIF anti-jamming algorithm with Gaussian distribution was verified by simulation and trial data.
Experiments with Simulation Data
The validity of the proposed algorithm was verified by simulations. The simulation conditions are set as follows.
The initial position is latitude 45.783898 degrees north and longitude 126.69458 degrees east. The Simulation lasts 60 min.
The specifications of the INS and GNSS are listed in Table 2.

To evaluate the accuracy of the methods, the RMSE of velocity and position was used as the performance metric, defined as

RMSE_k = sqrt[(1/M) Σ_{l=1}^{M} (x_k^l − x̂_k^l)²],

where x_k^l denotes the true navigation parameters, x̂_k^l denotes the estimated navigation parameters, and M = 500 represents the total number of Monte Carlo runs.

A previous study examined the influence of different sample numbers on the BBVI method [21] and concluded that the algorithm has the fastest convergence speed when S = 50. Thus, S = 50 was selected as the sample number in this paper. In addition, the parameters of the proposed method were set as η = 10, γ = 0.5, g = 0, G = 0, ε = 1 × 10^−10.

Before showing the navigation parameters of each algorithm, we used the Inv-Gamma, Inv-Wishart and Gaussian prior models to estimate the MNCM values. Taking the north velocity as an example, the estimated MNCM values and the true MNCM values of the north velocity are shown in Figure 9; for a clearer view of the estimated values, a magnified version is shown in Figure 10.
It can be seen from Figures 9 and 10 that all three models can estimate the MNCM values of the north velocity, which shows that all three methods can play an anti-jamming role. However, the MNCM values estimated by the Inv-Gamma and Inv-Wishart prior models were both smaller than the true MNCM values, whereas the MNCM estimated by the Gaussian prior model was more accurate. This would lead to inaccurate estimation of the navigation parameters by the Inv-Gamma or Inv-Wishart models; therefore, their anti-jamming effect is not as good as that of the proposed algorithm.
We evaluated the anti-jamming effect of each algorithm from the estimated navigation parameters. The RMSEs of the velocity and position from the existing VBAKF and the proposed algorithm are shown in Figures 11 and 12, respectively. It can be seen from Figures 9 and 10 that all three models can estimate the MNCM values of north velocity. It shows that all three methods can play an anti-jamming role. However, the MNCM values estimated by Inv-Gamma and Inv-Wishart prior models were both smaller than the true MNCM values, and the MNCM estimated by the Gaussian prior model is more accurate than the other two models. This would lead to inaccurate estimation of navigation parameters by the Inv-Gamma or Inv-Wishart models. Therefore, the anti-jamming effect is not as good as the proposed algorithm.
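For completeness, the RMSE metric defined above translates directly into a small helper; the array shapes are assumptions about how the Monte Carlo runs are stored.

```python
import numpy as np

def rmse_per_epoch(true_traj, est_trajs):
    """RMSE_k over M Monte Carlo runs, per the definition above:
    true_traj has shape (K,), est_trajs has shape (M, K), M = 500."""
    err = est_trajs - true_traj[None, :]
    return np.sqrt(np.mean(err**2, axis=0))
```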
We evaluated the anti-jamming effect of each algorithm from the estimated navigation parameters. The RMSEs of the velocity and position from the existing VBAKF and the proposed algorithm are shown in Figures 11 and 12, respectively. Table 3 shows the RMSEs of the velocity and position from existing VBAKF and the proposed algorithm, respectively. Table 3 show that the proposed algorithm with the prior Gaussian distribution has smaller RMSEs than the existing VBAKF algorithm with the prior distribution of Inv-Gamma or Inv-Wishart. The velocity and position accuracy of the proposed method is 3-4 times and 1-2 times higher than the existing VBAKF method, respectively. It also can be seen that the RMSEs of the velocity and position for the VBAKF based on the prior distribution of Inv-Gamma are similar to those estimated by the VBAKF based on the prior distribution of Inv-Wishart. The latter has a slightly higher accuracy compared with the former.
The simulation results and analyses proved that the prior distribution of MNCM is closer to the Gaussian distribution. Moreover, the correctness of the navigation parameters and effectiveness of the anti-jamming by the proposed algorithm were also proven.
Experiments with Trial Data
Simulation results showed the validity of the proposed method. The feasibility and practicality of the proposed algorithm in engineering were then verified by trial data.
The trial data were derived from near the Airport Road. The INS/GNSS integration system was composed of a laser inertial navigation system (LINS) and a GNSS receiver. The accuracy of the LINS is 1.5 n mile/h, with an output rate of 400 Hz. The position and velocity accuracy of the GNSS are 10 m and 5 m/s, respectively, with an output rate of 1 Hz. The true values of the navigation parameters are given by the PHINS, which was provided by the IXSEA company. The experimental platform is shown in Figure 13.
As the radio signals fall right into the navigation signal pass band, or there is multipath interference caused by buildings, the noise of the GNSS navigation parameters is time-varying. The time-varying interference noise is shown in Figure 3.
The BBVIF anti-jamming algorithm with Gaussian distribution was compared with the VBAKF algorithm with Inv-Gamma or Inv-Wishart distribution by using the trial data. The RMSEs of the velocity and position are shown in Figures 14 and 15 and Table 4. Furthermore, driving trajectories of the anti-jamming algorithms described above and the reference value provided by PHINS are shown in Figure 16. To see the driving trajectories more clearly, we have magnified a corner of the trajectories, and this is shown in the bottom right corner of Figure 16.
The results based on the trial data show that all three methods can complete the task of anti-jamming, which is consistent with the simulations. The accuracy of the navigation parameters from the BBVIF with Gaussian distribution is higher than that of the VBAKF with Inv-Gamma or Inv-Wishart distribution. The velocity and position accuracy of the proposed method is approximately 3 times higher than that of the existing VBAKF method. This is because the time-varying noise is more similar to a Gaussian distribution; therefore, the BBVIF with Gaussian distribution estimates it more accurately than the other two algorithms. This experimental result proved that the proposed algorithm is feasible and practical in engineering applications.
Discussion
Here, we have presented a novel anti-jamming technique for INS/GNSS integration based on BBVI. First, the KLD was used to prove that, compared with the Inv-Gamma or Inv-Wishart distributions, a Gaussian distribution with a time-varying mean value is closer to the time-varying noise. Accordingly, the prior distribution of the MNCM was assumed to be Gaussian. Second, to solve the problem that the VBAKF cannot be applied when the prior distribution and likelihood distribution are non-conjugate, we proposed the use of the BBVI method, which is based on stochastic optimization, instead of the VI method to estimate the time-varying MNCM. Finally, the validity and engineering practicality of the proposed method were verified by simulations and trial data. This novel anti-jamming algorithm, which deals with time-varying measurement noise, provides more accurate estimation and stronger anti-jamming performance than previous methods. Significantly, compared with VBAKF, the algorithm in this paper improves the accuracy of estimation. However, the algorithm is more complex than VBAKF because it needs to calculate the gradient of the ELB and then update the parameters by stochastic optimization. This is where we need to improve in future work.
"year": 2021,
"sha1": "7bec5aab652a34abeb34d99eee5ea04f1c72ecbd",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/11/8/3664/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "4746f3a18fecb8a2ab2a9e2d5301087032d10493",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
199777163 | pes2o/s2orc | v3-fos-license | THE EFFECT OF SUSTAINABLE LAND MANAGEMENT TECHNOLOGIES ON FARMING HOUSEHOLD FOOD SECURITY IN KWARA STATE, NIGERIA
Nigeria is among the countries of the world confronted with the food insecurity problem. The agricultural production systems that produce food for the teeming population are not sustainable. Consequently, the use of Sustainable Land Management (SLM) technologies becomes a viable option. This study assessed the effect of SLM technologies on farming households' food security in Kwara State, Nigeria. A random sampling technique was used to pick 200 farming households for this study. The analytical tools included descriptive statistics, the Shriar index, a Likert scale, a food security index and logistic regression analysis. The results indicated that the average age of the respondents was 51.8 years. The food security index showed that the proportions of food secure and insecure households were 35% and 65%, respectively. The binary logistic regression revealed that SLM technologies were one of the critical determinants of food security: an increase in the usage of SLM technologies by 0.106% raised food security by 1%. Other important factors estimated included farm income, family size, and the gender and age of the household head. To reduce the effects of food insecurity, the effective coping strategies adopted by the respondents were reduction in the quantity and quality of food consumed, engaging in off-farm jobs to increase household income, and using money proposed for other purposes to buy food. Governments at all levels should encourage the adoption and use of SLM technologies through both print and electronic media. Policies and strategies towards reducing household size should be vigorously pursued to reduce food insecurity.
Introduction
Food is the key to life. It represents a large part of typical Nigerian household expenses. Thus, food security is critical to any country of the world. Food security occurs when all people, at all times, have physical, civic and financial means to provide adequate, safe and nourishing food that satisfies their dietary requirements and food choices for an energetic and beneficial life (FAO, 2005). Food insecure and secure households are those whose food intake falls below and above their minimum calorie requirements respectively.
In spite of the available resources and the efforts made by governments at different times, food insecurity has remained one of the most significant challenges to Nigeria's economic development (Ifeoma and Agwu, 2014). The cost of food insecurity is substantially high. The poor performance of the agricultural sector deepens the food security problem of the country. Thus, it becomes more pertinent to increase the productivity of the sector. The agricultural sector is expected to create food for the people. The agricultural production technologies and practices adopted determine, to a greater extent, whether a farmer will be food secure or not. Knowing the best technologies and practices to achieve this goal is significant (Branca et al., 2013). The disadvantages of the dominant model of agricultural intensification include the increased use of capital inputs and problems of economic feasibility (IAASTD, 2009). Consequently, attention is given to alternative methods of intensification such as the use of SLM technologies. SLM technologies refer to practices and technologies that relate to the management of land, water, biodiversity, and other resources to meet human needs without endangering the ecosystems. The adoption of SLM technologies can lead to improved soil texture and structure, and can raise the activity of soil flora and fauna (World Bank, 2006; Pretty, 2011). It can also make farmers less vulnerable to climatic risks. Many studies (Ahmed et al., 2016; Amaza et al., 2008; Omonona et al., 2007; Babatunde et al., 2007) have investigated factors influencing the food security of households. However, none of these studies have assessed the effect of SLM technologies on household food security. Thus, this study measured food security status, assessed the effect of SLM technologies on food security, and described the reliable coping strategies used by the respondents to reduce the effect of food insecurity.
Area of study
The study area was Kwara State, which lies between latitudes 8° and 10° N and longitudes 3° and 6° E. The state has an area of 35,705 square kilometers with a population of 193,392,500 people (NPC, 2016). To the west, Kwara State shares an international boundary with the Republic of Benin, and to the north, an interstate boundary with Niger State. It also shares boundaries with Oyo, Osun and Kogi States to the southwest, southeast and east, respectively (Figure 1).
The climate consists of both wet and dry seasons, each lasting for nearly six months. The rainy season starts in April and ends in October, while the dry season commences in November and stops in March. Temperatures range from 33°C to 34°C, with a total annual rainfall of about 1,318 mm. The main occupation of the people is agriculture. The common crops grown are cassava, millet, maize, okra, sorghum, beniseed, cowpea, yam, sweet potatoes, and palm tree. The state has about 1,258 rural communities, and rural dwellers are the majority. Based on ecological characteristics, cultural practices and project administrative convenience, the state is categorized into four zones by the Kwara State Agricultural Development Project (KWADP).
Method of data collection and sampling

Primary data were gathered using a structured interview schedule. A three-stage random sampling procedure was adopted for this study. Two out of the four ADP zones were randomly selected in the first stage. This was followed by a proportionate selection of 20 villages from the two selected zones. Lastly, ten farming households each were picked randomly from the chosen villages to make a total of 200 farming households, as shown in Table 1. The state has about 185,000 farm families (KWADPs, 2010).

Analytical framework

The tools of analysis comprised descriptive statistics, a Likert scale, a food security index and logistic regression. The socio-economic features as well as the effective coping strategies adopted by respondents were explained using descriptive statistics. The respondents were further grouped into food secure and food insecure households using a food security index. The index is stated as follows: Fi = per capita food expenditure for the ith household divided by 2/3 of the mean per capita food expenditure (MPCFE) of all households, where Fi is the food security index; Fi ≥ 1 indicates a food secure household and Fi < 1 a food insecure household. Where the per capita monthly food expenditure (PCMFE) of a household is larger than or equal to two-thirds of the MPCFE, the household is food secure. On the other hand, a household whose PCMFE is smaller than two-thirds of the MPCFE is food insecure (Omonona et al., 2007). The proportion of food secure/insecure households was estimated using the headcount ratio (H) as follows:
H = M / N,  (1)
where M is the number of food secure (or insecure) households and N is the total number of households in the sample.
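These definitions translate directly into code. The sketch below classifies hypothetical households and computes the headcount ratio of Equation (1); the expenditure figures are fabricated for illustration.

```python
import numpy as np

def classify_food_security(pc_food_exp):
    """F_i = per capita food expenditure / (2/3 * MPCFE);
    F_i >= 1 marks a food secure household. Returns the indices,
    the secure flags, and the headcount ratio H = M / N."""
    pc_food_exp = np.asarray(pc_food_exp, dtype=float)
    threshold = (2.0 / 3.0) * pc_food_exp.mean()
    f = pc_food_exp / threshold
    secure = f >= 1.0
    return f, secure, secure.mean()

# Hypothetical monthly per capita food expenditures (naira):
f, secure, h_secure = classify_food_security([3500, 8200, 2900, 6100, 4700])
print(h_secure, 1.0 - h_secure)   # shares of secure and insecure households
```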
To ascertain the effect of SLM technologies on household food security, a binary logistic regression model was employed.
The model is stated as: Z = m0 + m1X1 + m2X2 + … + mkXk + u, where Z is the explained variable (the log-odds of being food secure, Z = ln[P/(1 − P)]), m0 is the constant, m1, m2, …, mk are the coefficients, X are the explanatory variables, k is the number of explanatory factors, P is the probability of being food secure, and u is the error term. The explanatory factors are: X1 = SLM technologies, measured using the Shriar index (2005); X2 = estimated farm income (₦); X3 = number of years of schooling (years); X4 = household size (adult equivalent); X5 = cooperative membership (COOP) (yes = 1; no = 0); X6 = sex of household head (D = 1 for male; D = 0 for female); X7 = age of the respondents (years).
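As a sketch of how such a model can be estimated with off-the-shelf tools, the snippet below fits a binary logit on synthetic stand-ins for X1-X7; none of the values are the study's survey data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200                                  # sample size used in the study

X = np.column_stack([
    rng.uniform(0, 1, n),                # X1: SLM technology index
    rng.normal(75, 20, n),               # X2: farm income ('000 naira)
    rng.integers(0, 16, n),              # X3: years of schooling
    rng.integers(1, 12, n),              # X4: household size
    rng.integers(0, 2, n),               # X5: cooperative membership
    rng.integers(0, 2, n),               # X6: sex of head (1 = male)
    rng.integers(25, 75, n),             # X7: age of head
]).astype(float)
y = rng.integers(0, 2, n)                # 1 = food secure (placeholder)

result = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(result.params)                     # estimates of m0 ... m7
```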
Estimation of Shriar index

Table 2 shows the different SLM technologies, the scale ranges and their associated weights. It shows that not all the farming activities could justify 0-3 scaling. Across all the activities, the maximum attainable point was 46. The SLM index is given as:
SLMi = Σj (Sij × Wj),  (3)

where SLMi is the Sustainable Land Management technology index for the ith household, Sij is the scale score for activity j employed by the ith household, and Wj is the weight of activity j. If a household is engaged in any activity, it gets 1 point and 0 otherwise. The scale range of 0-3 indicates whether the household is engaged in the activity and, if so, whether it does so at a low (1 point), medium (2 points), or high (3 points) scale. This classification was based on the percentage of the total area cultivated on which the strategy was employed. Production practices like the use of legumes are more endurable and so attracted the highest weighting of 3.5 (Salau et al., 2011). Intercropping with other crops besides legumes takes the value of 0 for no activity, and 1 (low), 2 (medium) and 3 (high) for the respective levels of activity. The scale range of organic fertilizer application, water management, agroforestry and mulching goes from 0 to 1: zero for no activity, and 1 if used. The scale of minimum tillage takes the value of 0 for no activity, and 1, 2 and 3 for the use of tractor, animal traction and hoes/cutlass, respectively.
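A sketch of the index computation follows. Only the legume weight of 3.5 and the maximum attainable point of 46 are stated in the text, so the activity list and the remaining weights are placeholders for the full Table 2.

```python
import numpy as np

ACTIVITIES = ["legumes", "intercropping", "organic_fertilizer",
              "water_management", "agroforestry", "mulching", "min_tillage"]
WEIGHTS = np.array([3.5, 3.0, 2.0, 2.0, 1.0, 1.0, 1.0])   # placeholders
MAX_POINTS = 46.0                       # maximum attainable score (text)

def slm_index(scale_scores):
    """Weighted sum of a household's activity scale scores (0-3 or 0-1),
    reported raw and normalised by the maximum attainable point."""
    raw = float(np.dot(scale_scores, WEIGHTS))
    return raw, raw / MAX_POINTS

raw, norm = slm_index(np.array([3, 1, 1, 0, 1, 0, 2]))
```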
To identify the effective coping strategies, a three-point Likert scale was employed. The response options and values assigned were as follows: very effective = 3; effective = 2; and not effective = 1. These values were added and divided by 3 to obtain the mean (2.0). Strategies with mean scores greater than 2.0 were regarded as effective, and those with lower scores as not effective.
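The cut-off rule translates directly into code; the response matrix below is fabricated for illustration.

```python
import numpy as np

def rate_strategies(responses, labels, cutoff=2.0):
    """Mean Likert score per strategy (3 = very effective, 2 = effective,
    1 = not effective); means above the cutoff are classed as effective."""
    means = np.asarray(responses, dtype=float).mean(axis=0)
    return {lab: ("effective" if m > cutoff else "not effective",
                  round(float(m), 2))
            for lab, m in zip(labels, means)}

scores = [[3, 1], [2, 2], [3, 1], [2, 2], [3, 1]]  # 5 households x 2 strategies
print(rate_strategies(scores, ["reduce food quality", "borrow food"]))
```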
Results and Discussion

Socioeconomic characteristics of respondents
The majority (94.5%) of the respondents were males. Based on the culture and tradition of the people, the male respondents usually had more access to farmland than the female respondents. The mean age of the respondents was 51.8 years, implying that most of the respondents were relatively old. Age is a critical variable which can affect the ability and agility with which the head meets the food needs of the household. An old household head is more likely to have a larger family size and may lack the energy required to work for the upkeep and sustenance of the family (Table 3).
About 35% of the household heads had access to credit facilities from cooperative societies. Access to credit facilities may affect the type of food eaten and the expenses of households. A large (62.5%) proportion of the household heads were literate. Hence, the respondents are expected to be able to take good decisions which will likely enhance their food security status (Babatunde et al., 2007). The respondents operated at a subsistence level with a mean farm size of 1.5 hectares. The size of farmland cultivated may affect the production and food security of the respondents (Akinsanmi and Doppler, 2005). Furthermore, the study revealed that most (62.5%) respondents received between ₦50,000 and ₦100,000 monthly from agricultural and non-agricultural related jobs.

Food security status of farming households

The calculated MPCFE was ₦4,219.787. Households whose per capita food expenditure fell below and above ₦4,219.787 were designated food insecure and food secure households, respectively. Hence, 35% and 65% of the farming households were food secure and food insecure, respectively (Table 4).
Factors influencing food security of households
The result indicated an R² value of 48.1%, which suggests that about 48% of the total variation in the explained variable was accounted for by the explanatory variables. Factors influencing food security were the adoption of SLM technologies, estimated farm income, household size, and the sex and age of the household head (Table 5).
The coefficient of SLM technologies used was positive and significant at the 1% level, suggesting that the adoption of SLM technologies was an important factor influencing food security in the study area: a 1% increase in the usage of SLM technologies raised food security by 0.106%. The higher the proportion of SLM technologies adopted, the larger the chance of being food secure. Estimated income was also significant at the 1% level, implying that the higher the income of a household, the more food secure the household is. These findings agree with those of Amaza et al. (2008) and Ifeoma and Agwu (2014). The coefficient of household size was negative and also significant at the 1% level, suggesting that larger households may be food insecure. This finding agrees with those of Tilksew and Beyene (2012) and Ifeoma and Agwu (2014). Age of respondents was significant at the 5% level, but it had a negative relationship with food security, indicating that younger respondents were more food secure than older ones. An older household head was more likely to have a larger household and may lack the energy required to work for the upkeep and sustenance of the household. Sex of the household head was also negative and significant at the 5% level, suggesting that female-headed households may be more food secure than their male counterparts. Surprisingly, education and cooperative membership did not significantly influence food security in the area.
Coping strategies employed by households
The most effective coping strategies adopted by respondents to reduce food insecurity included: reducing the quality of food eaten (M = 2.06), consuming less preferred foods (M = 2.09), diverting money budgeted for other uses to purchase food (M = 2.14), and doing off-farm jobs to raise income (M = 2.12) (Table 6). This finding agrees with the results of Haile et al. (2005), who opined that engaging in off-farm and non-farm jobs is necessary for diversifying household income. Other, less effective strategies were borrowing food from friends and relatives (M = 1.76), borrowing money to purchase food (M = 1.81), purchasing food on credit (M = 1.72), and reducing the number of people eating in the household (M = 1.40). According to Ifeoma and Agwu (2014), household assets could be disposed of to purchase food in times of adversity, crop failure, and other eventualities.
Conclusion
This study assessed the influence of SLM technologies on household food security in Kwara state, Nigeria. The study indicated that 35% and 65% of the respondents were food secure and food insecure respectively, with an average respondent age of 51.8 years. Furthermore, the adoption of SLM technologies was found to be significant in explaining the food security of households in the state: a 1% increase in the usage of SLM technologies increased food security by 0.106%. Other significant determinants were estimated farm income, household size, and the sex and age of the household head. Moreover, reducing the quality of food consumed, engaging in off-farm jobs to raise income, and diverting funds budgeted for other uses to purchase food were among the effective coping strategies used by the respondents to reduce the effects of food insecurity. Consequently, it is recommended that the adoption and use of SLM technologies be encouraged at the local, state, and federal levels by sensitizing farmers to the significance of SLM technologies through print and electronic media. Policies and strategies aimed at reducing household size should also be formulated and implemented to reduce food insecurity. | 2019-08-16T19:56:18.688Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "159db3a1a72e65aaa565e6638cf99995302c8aa3",
"oa_license": "CCBYSA",
"oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=1450-81091902203S",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "4a365b738392922f021f110fc736cd7734b3dfd2",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science",
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
12587052 | pes2o/s2orc | v3-fos-license | Fucoidan Supplementation Improves Exercise Performance and Exhibits Anti-Fatigue Action in Mice
Fucoidan (FCD) is a well-known bioactive constituent of seaweed extract that possesses a wide spectrum of activities in biological systems, including anti-cancer and anti-inflammatory effects and modulation of the immune system. However, evidence on the effects of FCD on exercise performance and physical fatigue is limited. Therefore, we investigated the potential beneficial ergogenic and anti-fatigue effects of FCD following physiological challenge. Male ICR mice in three groups (n = 8 per group) were orally administered FCD for 21 days at 0, 310 and 620 mg/kg/day, designated the vehicle, FCD-1X and FCD-2X groups, respectively. The results indicated that FCD supplementation increased grip strength (p = 0.0002) and endurance swimming time (p = 0.0195) in a dose-dependent manner. FCD treatments also produced dose-dependent decreases in serum levels of lactate (p < 0.0001) and ammonia (p = 0.0025), as well as an increase in glucose level (p < 0.0001), after the 15-min swimming test. In addition, FCD supplementation showed few subchronic toxic effects. Therefore, we suggest that long-term supplementation with FCD can exert a wide spectrum of bioactivities for health promotion, performance improvement and anti-fatigue.
Introduction
Fucoidan (FCD) is a well-known bioactive phytocompound of brown seaweed, an edible and economically important brown alga used in the food, feed and energy industries. FCD is a sulfated polysaccharide that contains substantial percentages of L-fucose and sulfate ester groups [1,2]. Research articles on FCD have escalated dramatically in the last 20 years: while PubMed shows 179 FCD publications in the decade from 1984 to 1993, there were 370 from 1994 to 2003, and 541 from 2004 to 2013. Numerous investigations revealed that FCD exhibits various biological effects, such as antimetastatic activity by blocking the interactions between cancer cells and the basement membrane [3]; induction of cancer cell apoptosis [4]; anticoagulant activity [5]; anti-inflammation [6,7]; antimicrobial activity [8]; and antioxidation [9,10]. These findings have stimulated an explosion of investigations on FCD, its bioactivities and its possible role in human health.
Fatigue is a symptom of physiological exhaustion, in which performance cannot be maintained and the organs no longer remain in favorable working condition. However, fatigue is always difficult to define, because of the unique intrinsic properties and anatomic features of individual muscles [11]. Exercise leads to changes in metabolism; energy provision; and cardiovascular, respiratory, thermoregulatory, and hormonal responses [12]. When demand exceeds capacity in one or more systems, either directly in the active muscles (peripheral fatigue) or in the central nervous system (central fatigue), fatigue occurs and exercise is terminated [13]. It is well documented that oxidative stress, energy source depletion, and excess metabolite accumulation are involved in the occurrence of physical fatigue [14][15][16]. Studies have also shown that antioxidant supplementation can prolong exercise performance, reduce metabolite production, and reduce physical fatigue [16,17]. Many studies show that FCD possesses significant antioxidant activity, both in vitro and in vivo: FCD has great potential for preventing free radical-mediated degradation of DNA in human umbilical vein endothelial cells [18] and protects against chemical-induced oxidative damage in mice [19]. However, according to a search of the PubMed database, there are still relatively few studies that directly address the possible ergogenic or anti-fatigue functions of FCD. Therefore, the objective of this study was to evaluate the effects of FCD on exercise performance and fatigue-associated biochemical indices according to our previous reports [17,20].
Materials, Animals, and Experiment Design
Fucoidan isolated from Laminaria japonica was purchased from Wel-Bloom Bio-Tech Corporation (Taipei City, Taiwan); the certificate of analysis for the test material is provided as a Supplementary document. Specific-pathogen-free male ICR strain mice (6 weeks old) were purchased from BioLASCO (Yi-Lan, Taiwan). All animals were provided with a standard laboratory diet (No. 5001; PMI Nutrition International, Brentwood, MO, USA) and distilled water ad libitum, and housed under a 12-h light/12-h dark cycle at room temperature (22 °C ± 1 °C) and 50%-60% humidity. The Institutional Animal Care and Use Committee (IACUC) of National Taiwan Sport University (NTSU) inspected all animal experiments, and this study conformed to the guidelines of protocol IACUC-10206 approved by the IACUC ethics committee.
In this study, the FCD dose designed for humans is 1.5 g per day. The mouse FCD dose (0.31 g/kg) used here was converted from the human equivalent dose (HED) based on body surface area using the following formula from the US Food and Drug Administration: assuming a human weight of 60 kg, the HED is 1.5 g ÷ 60 kg = 0.025 g/kg, and 0.025 × 12.3 = a mouse dose of 0.31 g/kg; the conversion coefficient 12.3 accounts for differences in body surface area between a mouse and a human, as described in our recent study [21].
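The arithmetic can be captured in a small helper function; the sketch below simply restates the conversion above, with the 60-kg human weight and the 12.3 surface-area factor as defaults.

```python
# Body-surface-area dose conversion: human daily dose -> mouse dose (g/kg/day).
def mouse_dose(human_daily_dose_g, human_weight_kg=60.0, km_factor=12.3):
    hed = human_daily_dose_g / human_weight_kg   # human-equivalent dose, g/kg
    return hed * km_factor                       # scaled to the mouse

print(round(mouse_dose(1.5), 2))  # 0.31 g/kg/day, the FCD-1X dose
```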
Twenty-four mice were randomly assigned to 3 groups (8 mice/group) for FCD treatment: (1) vehicle; (2) 0.31 g/kg FCD (FCD-1X); and (3) 0.62 g/kg FCD (FCD-2X). The vehicle group received the same volume of solution relative to individual BW, and all treatments were given orally to each mouse for a 21-day duration.
Forelimb Grip Strength Test
A low-force testing system (Model-RX-5, Aikoh Engineering, Nagoya, Japan) was used to measure the forelimb grip strength of mice undergoing vehicle or FCD treatments.The detailed procedures were described in our previous report [15].
Swimming Exercise Performance Test
The swim-to-exhaustion exercise test involved a constant load corresponding to 5% of body weight to evaluate endurance time, as described in our previous study [17]. The swimming endurance time of each mouse was recorded from beginning to exhaustion, which was determined by observing loss of coordinated movements and failure to return to the surface within 7 s.
Determination of Fatigue-Associated Biochemical Variables
Effects of FCD supplementation on fatigue-associated biochemical indices were evaluated post-exercise as in our previous reports [15,17,20,22]. At 1 h after the FCD supplementation, all animals underwent a 15-min swim test without weight loading. After the 15-min swim exercise, blood samples were immediately collected and centrifuged at 1500× g and 4 °C for 10 min for serum separation. Serum lactate, ammonia, glucose and CK levels were determined using an autoanalyzer (Hitachi 7060, Hitachi, Tokyo, Japan).
Histological Staining of Tissues
Target organs were carefully removed, minced and fixed in 10% formalin after sacrifice.All tissues were then embedded in paraffin and cut into 4-μm thick slices for morphological and pathological evaluations.Tissue sections were stained with hematoxylin and eosin (H & E) and examined using a light microscope equipped with a CCD camera (BX-51, Olympus, Tokyo, Japan) by a veterinary pathologist.
Statistical Analysis
All data are expressed as mean ± standard error of the mean (SEM) for n = 8 mice per group. Statistical differences among groups were analyzed by one-way analysis of variance (ANOVA), with the Cochran-Armitage test used for dose-effect trend analysis, in SAS ver. 9.0 (SAS Institute, Cary, NC, USA). p values of <0.05 were considered statistically significant.
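The analysis itself was run in SAS; the sketch below is a rough Python analogue with invented group values, using a one-way ANOVA and a simple linear regression of response on dose as a stand-in for the Cochran-Armitage trend test.

```python
# Rough Python analogue of the statistics described above (not the SAS code).
from scipy import stats

vehicle = [7.5, 8.1, 7.9, 8.3, 7.6, 8.0, 7.8, 8.2]  # invented example values
fcd_1x  = [6.2, 6.6, 6.4, 6.1, 6.7, 6.3, 6.5, 6.4]
fcd_2x  = [6.0, 5.9, 6.3, 6.2, 5.8, 6.1, 6.4, 6.0]

f_stat, p_anova = stats.f_oneway(vehicle, fcd_1x, fcd_2x)   # one-way ANOVA

doses  = [0] * 8 + [310] * 8 + [620] * 8                    # mg/kg/day
values = vehicle + fcd_1x + fcd_2x
trend  = stats.linregress(doses, values)    # stand-in for Cochran-Armitage

print(f"ANOVA p = {p_anova:.4f}, dose-trend p = {trend.pvalue:.4f}")
```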
Effects of FCD on Forelimb Grip Strength
The forelimb grip strength values in the vehicle, FCD-1X and FCD-2X groups were 136, 159 and 165 g, respectively (Figure 1). The forelimb grip strengths of the FCD-1X and FCD-2X groups were 1.17-fold (p = 0.0074) and 1.21-fold (p = 0.0013) higher, respectively, than that of the vehicle group. In the trend analysis, absolute forelimb grip strength increased dose-dependently with the FCD dose (p = 0.0002). In general, programmed exercise training is required to increase grip strength [21]; however, the results indicated that FCD supplementation benefited grip strength even though the test animals did not undergo a training intervention. Thus, long-term FCD supplementation can benefit grip strength even when no training protocol is implemented. Our previous reports have shown that seven to twenty-one days of supplementation with plant extracts, resveratrol, or ethanolic extract of deer antler, or long-term supplementation with ergogenic aids such as whey protein, improves the grip strength of untrained animals [17,[20][21][22]. Thus, FCD, a substance of brown algae origin used as an ingredient in some health foods, may be an alternative supplement for promoting body strength under an untrained condition or within a programmed training protocol.
Effects of FCD on Exercise Performance in a Weight-Loaded Swimming Test
Energy metabolism during muscular activity determines the level of physiological fatigue [23], and exercise endurance is an important index in evaluating anti-fatigue treatments. As shown in Figure 2, the endurance swimming times in the vehicle, FCD-1X, and FCD-2X groups were 8.0, 14.9 and 14.4 min, respectively. The exhaustive swimming times of the FCD-1X and FCD-2X groups were 1.58-fold (p = 0.0455) and 1.63-fold (p = 0.0306) longer, respectively, than that of the vehicle group. In the trend analysis, endurance swimming time increased dose-dependently with the FCD dose (p = 0.0195). Based on these results, we suggest that FCD improves endurance performance in the absence of training. Further investigation is required to elucidate the effects of long-term FCD supplementation combined with exercise training on endurance performance.
Effect of FCD Supplementation on Serum Lactate, Ammonia, Glucose and CK Levels after Acute Exercise Challenge
Post-exercise muscle fatigue can be evaluated by important biochemical indicators, including lactate, ammonia, glucose, and creatine kinase (CK) levels after exercise [24,25]. Lactate accumulates when cellular glycolysis exceeds the aerobic metabolic capacity. As lactic acid concentration increases, hydrogen ions accumulate at high levels, leading to fatigue due to acidification [26,27]. Therefore, lactic acid is related to exercise intensity, glycogen storage conditions and fatigue. As shown in Figure 3A, the respective lactate levels in the vehicle, FCD-1X, and FCD-2X groups were 7.9 ± 0.3, 6.4 ± 0.3, and 6.1 ± 0.4 mmol/L; the lactate levels of the mice that received FCD-1X and FCD-2X supplementation were 18.5% (p = 0.0032) and 22.5% (p = 0.0006) lower, respectively, than those of mice that received the vehicle treatment. In the trend analysis, serum lactate levels decreased dose-dependently with the FCD dose (p < 0.0001). After acute exercise, recovery is significantly affected by the blood lactate clearance rate. Approximately 75% of the total lactate produced is used for oxidative production of energy in the exercising body, and it can be utilized for the de novo synthesis of glucose in the liver [28]. In the present study, FCD supplementation decreased blood lactate levels and increased the glucose concentration after the acute exercise challenge. Therefore, we suggest that FCD supplementation may potentiate the removal and utilization of blood lactate after exercise. Ammonia, another important metabolite, accumulates with highly intensive long-lasting exercise and can be reduced by herbal supplementation [23]. During energy metabolism for exercise, ammonia is generated from different sources. The immediate source of ammonia production is the purine nucleotide cycle [29], i.e., deamination of adenosine monophosphate to inosine monophosphate; ammonia is substantially elevated during intensive or prolonged exercise, when the ATP utilization rate may exceed the production rate. The other source is the gluconeogenesis process, via deamination of several amino acids produced by proteolysis. The direction of movement of ammonia or ammonium ion depends on concentration and pH gradients between tissues, including the brain. Although exercise-induced ammonia toxicity is transient and reversible relative to disease states, it may affect continuing coordinated activity in critical regions of the central nervous system. As shown in Figure 3B, serum ammonia levels in the vehicle, FCD-1X and FCD-2X groups were 100 ± 5, 91 ± 9 and 83 ± 3 μmol/L, respectively. Compared with the vehicle group, the serum ammonia level was slightly decreased, by 17.7% (p = 0.0544), in the FCD-2X group.
Blood glucose level is an important index for performance maintenance during exercise [15,17,20]. Serum glucose levels in the vehicle, FCD-1X, and FCD-2X groups were 170 ± 5, 192 ± 12, and 198 ± 5 mg/dL, respectively. The value for the FCD-2X group was 1.16-fold higher (p = 0.0246) than that of the vehicle group (Figure 3C). The trend analysis also showed that serum glucose levels increased dose-dependently with the FCD dose (p < 0.0001). Therefore, continuous FCD supplementation for 21 days could increase energy utilization and improve exercise performance.
Serum CK level is an important clinical biomarker of muscle damage, muscular dystrophy, severe muscle breakdown, myocardial infarction, autoimmune myositides and acute renal failure. High-intensity exercise challenge can physically or chemically cause tissue damage and muscular cell necrosis [30]. Serum CK concentration is low in the normal state, but it increases when hypoxia and the accumulation of metabolites during exercise damage muscle cells, resulting in decreased exercise performance [31]. Serum CK levels in the vehicle, FCD-1X and FCD-2X groups were 800, 704 and 602 mg/dL, respectively (Figure 3D), with no significant difference among the three groups. Therefore, our study suggests that 21 days of FCD supplementation does not affect serum CK levels after an acute exercise challenge.
A previous study showed that tumor necrosis factor (TNF) is synthesized in muscle cells under mechanical stress and probably plays an important role in causing fatigue [32]. A more recent study further demonstrated that the pro-inflammatory cytokines IL-6 and TNF-α reduce the intracellular glycogen stock and lead to fatigue [33], and FCD can downregulate IL-6 and TNF-α levels both in vitro and in vivo [34][35][36]. In addition, the FCD used in this study was isolated from Laminaria japonica. Previous work showed that the backbone of this FCD consists primarily of (1→3)-linked α-L-fucopyranose residues (75%) with a few (1→4)-α-L-fucopyranose linkages (25%). Moreover, the molar ratio of sulfate to fucose content plays an important role in the free-radical scavenging activity of FCD [37]. Therefore, we suggest that FCD may have the potential to be developed as an ergogenic supplement, partly owing to its anti-inflammatory and antioxidant activity.
Subchronic Toxicity Evaluation of FCD Supplementation
Subchronic toxicity of FCD supplementation was evaluated via the animals' behavior, diet, growth, organ weights, clinical biochemistry and histopathology. The vehicle and FCD supplementation groups did not differ in daily behavior during treatment. Morphological data from the experimental groups are summarized in Table 1. There was no significant difference in initial BW among the vehicle, FCD-1X, and FCD-2X groups. Because we observed a significant increase in the daily intake of diet and water in FCD-fed mice, the effects of FCD on the final BW and on liver, muscle and brown adipose tissue (BAT) mass gain were of primary interest. The food intake and water consumption of the FCD-2X group were significantly higher, by 1.07-fold (p = 0.0009) and 1.15-fold (p = 0.0003), respectively, compared to the vehicle group. Consistent with the food intake data, the final BW of the FCD-2X group was significantly higher than that of the vehicle group (Table 1). The trend analysis showed significant increases in the final BW (p = 0.0003) and food intake (p < 0.0001) with increasing dosage of FCD supplementation. Therefore, the effect of FCD on increasing the BW was clearly dependent on food intake. In addition, the trend analysis also showed significant increases in the tissue weights of the liver (p = 0.0073), muscles (p = 0.0089), and BAT (p = 0.0081) with increasing dosage of FCD treatment. The relative tissue weight (%) is a measure of the different tissue weights adjusted for individual BW, and there were no significant changes in the relative liver, skeletal muscle (gastrocnemius and soleus muscles), heart, lung, kidney, epididymal fat pad (EFP) or BAT weights (%) among the vehicle, FCD-1X, and FCD-2X groups (Table 1). We also found no gross abnormalities attributable to FCD treatment when weighing the organs.
Effect of FCD Supplementation on Biochemical Analyses at the End of the Experiment
In the present study, we observed beneficial effects of FCD on grip strength and the exhaustive exercise challenge, and measured other physiological effects, over 21 days of FCD supplementation. We further investigated whether 21 days of FCD treatment could cause any negative effects on other biochemical markers in healthy mice. We examined tissue- and health-status-related biochemical parameters and major organs, including the liver, skeletal muscle, heart, kidney, and lung, via histopathological examinations in FCD-treated mice (Table 2 and Figure 4). Levels of biochemical indices, including ALT, LDH, TBIL, creatinine, UA, TC, and glucose, did not differ among groups (p > 0.05, Table 2). We found that serum AST and CK levels of the FCD-1X group were significantly lower, by 17.91% (p = 0.0375) and 40.42% (p = 0.0167), than those of the vehicle group. The serum albumin level of the FCD-1X group was significantly higher, by 1.04-fold (p = 0.0218), than that of the vehicle group. Therefore, daily supplementation with FCD may offer tissue protection and beneficial effects following highly intensive exercise. In addition, serum levels of TP, an index of nutritional status, in the FCD-1X and FCD-2X groups were significantly higher, by 1.07-fold (p = 0.0027) and 1.05-fold (p = 0.0219), than that of the vehicle group. The trend analysis showed significant increases in the serum TP level (p = 0.0297) and food intake (p < 0.0001) with increasing dosage of FCD supplementation. Therefore, the effect of FCD on increasing TP was clearly dependent on food intake.
Serum levels of BUN, an important indicator of renal function, in the FCD-1X and FCD-2X groups were 14.78% (p = 0.0016) and 10.22% (p = 0.0207) lower, respectively, than that of the vehicle group. The trend analysis also showed that serum BUN levels decreased dose-dependently with the FCD dose (p = 0.0320). L. japonica is a popular marine medicinal plant in China, used as a traditional medicine for eliminating edema, a symptom of kidney disease. Consistent with a previous report [38], we found that FCD supplementation could benefit renal function in healthy mice. Moreover, serum levels of TG in the FCD-1X and FCD-2X groups were 28.83% (p = 0.0147) and 58.40% (p < 0.0001) lower, respectively, than that of the vehicle group. Serum TG levels decreased dose-dependently with FCD supplementation, with significance in the trend analysis (p < 0.0001). A previous study showed that water-soluble polysaccharides could decrease serum TC and TG levels by increasing fecal neutral steroid and bile acid excretion [39]. FCD is classified as a water-soluble polysaccharide that is considered a dietary fiber. Our data are consistent with a previous study showing that FCD could remarkably reduce blood lipid levels in hyperlipidemic rats [40]. Therefore, we suggest that FCD may have the potential to be developed as a therapeutic for reducing blood lipids.
Effect of FCD Supplementation on Histological Examinations at the End of the Experiment
On morphological observation, the arrangement of the sinusoids and hepatic cords in the liver showed no changes with FCD treatment (Figure 4A). The gastrocnemius muscles exhibited polygonal myofibers of uniform shape and size without rhabdomyolysis (Figure 4B). Hypertrophy and hyperplasia were not observed in heart cardiomyocytes (Figure 4C). The structure of the renal tubules and glomeruli did not differ among treatments (Figure 4D). In addition, all animals showed typical tissue architecture of the lung alveoli on H & E staining (Figure 4E). In a previous study, Wistar rats of both sexes were exposed to FCD at a dose of 300 mg/kg body weight/day, and no mortality or other signs of toxicity were observed during six months of observation [41]. Furthermore, our histopathological examinations revealed that FCD supplementation for 21 days yielded no adverse effects in major organs such as the liver, skeletal muscle, heart, kidney and lung. Therefore, the dose of FCD supplementation used in this study was safe.
Conclusions
Fucoidan exhibits anti-fatigue activity by decreasing plasma lactate and ammonia levels and increasing serum glucose, thereby benefiting exercise performance in mice. In this study, we found that 21 days of FCD supplementation without training significantly improved the forelimb grip strength and the swimming time to exhaustion of the test animals. Among the biochemical indices, exercise-induced fatigue-related parameters, including lactate, ammonia, and glucose, were positively modulated by FCD supplementation in a dose-dependent manner. In addition, FCD showed beneficial effects on the lipid profile and on liver and renal functions. Many studies demonstrate that FCD has strong antioxidant activity and immune functions [34,37]. According to the previously mentioned research and our study, FCD could be developed into an antioxidant agent and blood lipid-reducing supplement, and we suggest that FCD may be a potential ergogenic aid against abnormal metabolite accumulation and for increasing the utilization of an important fuel source (glucose). In conclusion, our study provides experiment-based evidence to support the anti-fatigue function of FCD supplementation and suggests a use for FCD as a potential ergogenic and anti-fatigue agent.
Figure 1.
Figure 1.Effect of FCD (Fucoidan) supplementation on forelimb grip strength.Data are presented as the mean ± SEM of 8 mice in each group.One-way ANOVA was used for the analysis.Different letters (a, b) indicate a significant difference at p < 0.05.Low-dose (FCD-1X) and high-dose (FCD-2X) FCD at 310 and 620 mg/kg/day.
Figure 2.
Figure 2. Effect of FCD (Fucoidan) supplementation on swimming exercise performance. Mice were pretreated with the vehicle, FCD-1X, or FCD-2X for 21 days and then, 1 h after the final dose, performed an exhaustive swimming exercise with a load equivalent to 5% of the mouse's body weight attached to its tail. Data represent the mean ± SEM (n = 8 mice). One-way ANOVA was used for the analysis. Different letters (a, b) indicate a significant difference at p < 0.05. Low-dose (FCD-1X) and high-dose (FCD-2X) FCD at 310 and 620 mg/kg/day.
Figure 3.
Figure 3. Effects of FCD (Fucoidan) supplementation on serum levels of lactate (A); ammonia (B); glucose (C); and CK (D) after an acute exercise challenge.Data represent the mean ± SEM of eight mice in each group.Columns with different letters (a, b) differ significantly, p < 0.05 by a one-way ANOVA.Low-dose (FCD-1X) and high-dose (FCD-2X) FCD at 310 and 620 mg/kg/day.
Table 1.
General characteristics of the experimental groups.
Values are the mean ± SEM for n = 8 mice in each group. Values in the same line with different superscript letters (a, b) differ significantly, p < 0.05 by one-way ANOVA. Food efficiency ratio: BW gain (g/day)/food intake (g/day). Muscle mass includes both the gastrocnemius and soleus muscles in the back part of the lower legs. BW: body weight; BAT: brown adipose tissue; EFP: epididymal fat pad; FCD: Fucoidan. Low-dose (FCD-1X) and high-dose (FCD-2X) FCD at 310 and 620 mg/kg/day.
Table 2.
Biochemical analysis of the FCD treatment groups at the end of the experiment. | 2016-03-01T03:19:46.873Z | 2014-12-31T00:00:00.000 | {
"year": 2014,
"sha1": "37783a0719ed412595b5d6da954f06d6299f2b8d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/7/1/239/pdf?version=1420019226",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "c9878948373d91a0c5d0324ebcdbea1223eabd19",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
269465996 | pes2o/s2orc | v3-fos-license | Construction and Analysis of Collaborative Educational Networks based on Student Concept Maps
Network Analysis has traditionally been applied to analyzing interactions among learners in online learning platforms such as discussion boards. However, there are opportunities to bring Network Analysis to bear on networks representing learners' mental models of course material, rather than learner interactions. This paper describes the construction and analysis of collaborative educational networks based on concept maps created by undergraduates. Concept mapping activities were deployed throughout two separate quarters of a large General Education (GE) course about sustainability and technology at a large university on the West Coast of the United States. A variety of Network Analysis metrics are evaluated on their ability to predict an individual learner's understanding based on that learner's contributions to a network representing the collective understanding of all learners in the course. Several of the metrics significantly correlated with learner performance, especially those that compare an individual learner's conformity to the larger group's consensus. The novel network metrics based on collective networks of learner concept maps are shown to produce stronger and more reproducible correlations with learner performance than metrics traditionally used in the literature to evaluate concept maps. This paper thus demonstrates that Network Analysis in conjunction with collective networks of concept maps can provide insights into learners' conceptual understanding of course material.
INTRODUCTION
The Computer-Supported Collaborative Learning (CSCL) community has frequently turned to Network Analysis (often referred to as Social Network Analysis) to supplement traditional pedagogical assessments in characterizing learner understanding [33,49,56,66].Network Analysis in CSCL is typically performed over webs of nodes (usually representing individual learners), and edges (usually representing interactions or relationships between learners).The idea behind this kind of analysis is that individual actors are embedded in webs of relationships with other actors, and that these webs can be analyzed to gain insights about a particular actor based on that actor's ties to others [7].CSCL seeks to understand the relationships between learning, learner interaction, and digital technologies; as such, Network Analysis is a natural fit for CSCL due to its ability to characterize these relationships in a quantifiable way [32,47].Various studies have provided evidence that analyzing learner interaction with Network Analysis techniques can provide insights such as identifying learner roles [55,65], assessing learner problem-solving abilities [74], and understanding how learners make sense of complex systems [16].It has been argued that using Network Analysis over such a network of learners and their interactions is key to understanding how learning occurs [18,73].
However, CSCL has not fully taken advantage of the power of Network Analysis for investigating learner cognition.Cognitive Science research indicates that the cognitive processes in our brains form complex systems that help us solve problems, and that Network Analysis has enormous potential to model and investigate these processes [60,61].Unfortunately, current CSCL Network Analysis methodologies do not typically seek to construct or analyze networks of the cognitive models of learners.The current work takes a step towards advancing the subfield of Network Analysis in CSCL by taking advantage of insights gleaned from the application of Network Analysis in the field of Cognitive Science to model learner conceptions themselves as a network.
This study moves the conversation away from Social Network Analysis and towards a form of Epistemic Network Analysis [58] in which, rather than analyzing online interactions between learners in a discussion board setting, the focus is on analyzing learners' mental models about the central themes of a course.Learner concept map submissions are merged together to form a collaboratively-constructed collective network that can be seen as the entire class's consensus about the course material.Each individual learner map is then assessed based on its conformity to the course-wide consensus.This methodology provides several benefits, including 1) measuring what a learner knows at a given moment and how their knowledge changes over time, 2) understanding what the learners collectively know and how this collective understanding changes over time, and 3) understanding how an individual learner's mental model compares to the consensus of the larger group.
The work in this paper is novel in that it uses a group consensus-based approach to evaluate individual learners, based on the relative positioning of the learners' contributions within the collective network. The motivation for doing so is based on past results indicating that a group's collective mental model can approximate that of an expert [4,26,38]. The specific contributions of this work are 1) a novel methodology for collecting, merging, and analyzing concept maps generated by learners, and 2) an empirical comparison of a variety of novel and traditional metrics for evaluating learners' concept maps, using data from two offerings of an undergraduate course.
BACKGROUND AND RELATED WORK

2.1 Network Analysis in CSCL
Network Analysis research in CSCL settings has examined whether network metrics can be indicative of a learner's performance.Traditionally, this has been done by studying unimodal networks of learner interactions, often generated automatically from collaborative online learning platforms such as discussion boards [18,56].Unimodal networks are those which have only one type of network actor, such as learners.These networks are often constructed using digital trace data such as log files documenting online interactions amongst learners or between learners and teachers [2,39,53].These works often argue that learners who engage more with other learners are likely to perform better, and this is tested by comparing the centrality of a learner's position within a network of learners to some external performance metric [8].The majority of studies found strong correlations between the centrality of a learner in a network of interactions and learner performance [13,51,62], though at least one found little to no correlation [47].
Although creating and analyzing social interaction networks is convenient and accessible, focusing solely on this type of data limits the understanding that can be gleaned about learning patterns in CSCL.These types of learner interaction networks are generally, though not always, unimodal.A unimodal network of learners does not contain information on what content was discussed, only who discussed it; thus, it can be difficult to track learning about specific topics using this approach.
A small amount of past work has focused on networks connecting learners with course topics (i.e., bimodal networks).Agarwal and Ahmed [3] parse learner editing of Wiki pages to create a bimodal network of learners and pages, in order to assess learner collaboration and engagement.Likewise, Kim et al. [35] construct a bimodal network representing both which learners interacted with each other and which course topics they discussed.This approach has the advantage of providing information about more than just with whom learners interacted, additionally including information about which specific topics and themes from the course learners focused on.
A pair of literature surveys from the past decade have pointed out the lack of diversity of network actors in CSCL Network Analysis research.These surveys have made calls to expand the breadth of actors and types of relational ties used, as well as increase the diversity of the metrics used to analyze the networks and correlate the results with learner performance or learning outcomes [11,18].
Concept Mapping
The phrase "concept mapping" is generally used to refer to Novakian concept mapping [9].Novakian concept mapping is an educational activity in which learners create network-like representations of the relationships between related concepts [46].For example, learners might map out their understanding of the Solar System by including concepts for planets, moons, and the Sun, and utilizing the relationship "revolves around" to connect the concepts in a logical manner.
Novakian concept mapping is often used to elicit an individual learner's mental model within a particular domain and/or to track individual learning [19,45,48,52].Novakian concept maps benefit both learners and instructors.For instance, they help learners focus on the connections between course topics [15,20,63] as well as integrate knowledge across modules of a given course [14].Additionally, they help instructors understand learner conceptions of course material, which is critical to improving teaching and achieving learning objectives [21,40,69].
Another technique commonly referred to as "concept mapping" is Group concept mapping.In contrast to Novakian concept mapping, Group concept mapping is a mixed-methods technique in which participants (often domain experts) brainstorm concepts related to a specific domain and sort these concepts into categories.Multidimensional Scaling (MDS) is applied to the participants' submissions in order to automatically generate a map of concepts displayed in two-dimensional conceptual space [54].In contrast to Novakian concept mapping, Group concept mapping does not seek to elicit the mental model of an individual nor generate relationships between concepts.It is instead concerned with the emergent properties of the algorithmically-generated group map, which may be used as a base of knowledge about the domain [54].
In this paper, learners constructed Novakian concept maps that are merged together to form a collective network.For clarity, future references in this paper to "concept maps" or "concept mapping" without a qualifier can be assumed to refer to Novakian concept maps.
Concept maps can be analyzed based on both content and structure.Often, the structural analysis of concept maps occurs in a qualitative manner; for example, maps can be visually classified using various structural templates such as "network", "chain", "tree", etc. [5,36].It is also common to analyze maps via simple quantitative metrics such as the number of concepts or the number of relationships present in the map.More intricate concept map evaluation methods also exist; for example, Biswas et al. [6] deploy a teachable automated agent that takes a pre-programmed quiz based on student concept map submissions; this allows students to receive real-time feedback on their maps.
Network Analysis and Concept Mapping

Both Group and Novakian concept maps are highly compatible with Network Analysis methodologies, due to their inherent network structure as well as their emphasis on the importance of relationships between entities. McLinden [44] notes that, while the goals of concept mapping and Network Analysis differ, the underlying data structure is the same. Although there are many studies exploring Network Analysis on Group concept maps [29,44,68,71], there are far fewer exploring Network Analysis on Novakian concept maps. However, Network Analysis has the potential to quantitatively characterize individual Novakian concept map structure, which can inform the educator whether the course goals are being met [60].
A common assumption underlying this claim is that the structure of a learner's concept map approximates the learner's level of understanding about a topic.Specifically, certain network structures are thought to be more indicative of expert-level knowledge, while others are more indicative of novice-level knowledge [38,59].Intuitively, those with more knowledge about a domain have a richer, more interconnected mental model than those with less knowledge about the domain.
In spite of the informative value of using Network Analysis with learner concept maps, only a small quantity of work has applied such techniques.In one such case, Siew et al. [59] analyze individual learner concept maps about Psychology using network metrics such as Average Shortest Path Length (ASPL) and Clustering Coefficient (CC), finding that these metrics were able to predict learner performance on quizzes.In another, Koponen and Nousiainen [38] create a collective network by merging 12 individual learner concept maps together, and use centrality metrics to identify key nodes in the network; however, they did not use the collective network to estimate individual learner performance.Schwendimann [57] uses a centrality metric to track changes in learner understanding of certain expert-determined "indicator concepts" via iterative concept mapping.The findings of Markham [43] show that maps about biological knowledge of mammals created by novices exhibited fewer hierarchical levels and fewer edges between concepts than maps created by experts.
Collaborative Concept Mapping

The subfield of Computer-Supported Collaborative Concept Mapping (CSCCM) defines collaborative concept mapping as two or more individuals using an online system to work together on constructing one or more concept maps as a tool to facilitate shared understanding and construct knowledge [24,27,37,42]. Studies have shown the effectiveness of this approach in facilitating problem solving [22,64].
An alternative, less common approach to collaborative concept mapping involves learners working on concept maps individually, after which the individual maps are merged together to create a representation of the collective mental model of the participants [12,17,38].A slight variation on this approach allows learners to optionally "share" specific elements from their individual map to the collective representation [50].One benefit of this approach is that the collective representation is an aggregation of each individual's mental model on the topic.Thus, the contribution of each individual can be identified and analyzed.
Concept Mapping as Consensus Forming

Concept mapping activities with multiple participants can be viewed as a consensus-forming activity, in which the prior ideas of individual learners are built upon via group communication [28] and the final map reflects the agreement of multiple mental models [25]. In some cases, a group consensus on concept map content and structure can be formed through social phenomena such as negotiation [41] or mediated via online learning tools [12]. In other cases, consensus is formed by merging individual learner maps without direct social interaction between the participants. For instance, Koponen and Nousiainen [38] find that knowledge of physics concepts was highly dispersed among learners, but a collective network aggregating each individual learner's map nearly perfectly matched a similar map created by an expert.
In this paper, we hypothesize that learners whose maps align more closely with the group consensus will have better understanding of the course material, and therefore higher performance.In order to test this hypothesis, several novel centrality and consensus-based metrics are compared against traditionally-used metrics, based on their ability to evaluate individual learner performance.
Deployment of Assignments
Concept mapping assignments were deployed in two separate iterations of the same undergraduate course at a large US research university over the course of two academic quarters, enrolling a total of 679 participants.The course is a general education course about the intersection of sustainability and technology, and enrolls students from a wide variety of schools, majors, and academic standing within the university.Throughout both iterations, learners were asked to make concept maps by creating nodes (representing concepts) and linking them with edges (representing relationships).For example, learners might propose that "biodiversity increases sustainability."By doing this repeatedly, learners created networks of related concepts and their relationships.
For both of the quarters, a dataset was generated containing the statements in each individual learner concept map along with a score for that map.Scores were assigned on a per-statement basis, either by a member of the research team or a team of Teaching Assistants (TAs).Statements were awarded a score of 1 if they were correct and relevant to the course material, and a score of 0 if they were incorrect or irrelevant to the course material.The learner's final score was the percentage of statements they included in their concept map that received a score of 1.
In Spring '21, learners used the freely available software CmapTools [10] to construct their concept maps, and in Fall '21, learners used a custom concept mapping tool created by the research team.Figure 1 shows examples of two concept maps, one being constructed in CmapTools and the other in the custom tool.
The differences between the assignments across both quarters are summarized in Table 1.

¹ Team of TAs scores all relationships after performing a consensus-building activity.

Spring '21.

The first deployment of the concept mapping assignment occurred in the Spring '21 iteration of the course. To ensure consistency and avoid the use of synonyms between maps, the assignment restricted learners to concepts drawn from the list of all English Wikipedia article titles and relationships drawn from a list of 20 provided by the instruction staff. Learners were also required to include the two central themes of the course, "sustainability" and "technology", as nodes in their maps. This quarter focused on learners improving their maps iteratively: in the first assignment, learners created an initial map, while in the second and third assignments, learners revised their maps based on course material introduced since the previous revision. The data analyzed for this study were taken from the third and final concept mapping submission. A member of the research team assigned scores to each statement, which were used to calculate the final per-learner scores.
In order to better focus the analysis on learners' original contributions, the two required nodes ("sustainability" and "technology") were removed from the collective network before the Network Analysis metrics were calculated.
Fall '21.
In the second deployment, learners were again required to use Wikipedia articles as concepts.However, as the instructor wanted to examine the ways that learners constructed causality networks, the only allowed relationship was "causes".Learners were not required to include any specific nodes in their submissions; thus, this quarter's assignment was slightly less restrictive in terms of possible concepts but much more restrictive in terms of permitted relationships.
In this quarter, there was no revise-and-resubmit process as in the previous quarter; the data analyzed is from the learners' first and only submissions.Learners submitted the assignments midway through the quarter.Each individual statement was assigned a score of 0 or 1 via a TA review process, in which the TAs first participated in a consensus forming activity to establish shared criteria and then were assigned an anonymized spreadsheet containing a portion of the learner statements to review and score.
Merging of Individual Concept Maps into a Collective Network
By merging individual learner concept maps together, one can form a network that exhibits the class's collective understanding of the material.However, in order to create a collective network, a merge strategy must first be selected.
Such a strategy must define how duplicate edges between individual maps will be handled (i.e., whether to weight an edge by the number of learners reporting it or represent all edges as unweighted).A weighted strategy places higher values on edges added by more students, while an unweighted strategy allows for the possibility that individual students made unique but valuable edges not included by other students.
A merge strategy must also define whether the directionality of the relationships will be preserved.For example, say that a learner argues that Climate Change -> causes -> Sea Level Rise.A directed merge strategy would preserve the fact that Climate Change "points" at Sea Level Rise.An undirected merge strategy would omit this information, and merely note that these two nodes were connected by at least one learner.
A directed merge strategy preserves the most information from the original concept map; however, an undirected merge strategy acknowledges that the semantics of the relationship label affect the directionality in an arbitrary way.For example, if the relationship label "causes" were to be replaced by "caused by", the directionality of every relationship using this label would be reversed, but this says nothing about the actual relationship between the two nodes, only the semantics.By using an undirected strategy, we remove the bias introduced by the semantics of the relationship label.
When creating the collective networks, only the presence or non-presence of an edge between two nodes was included; the specific label chosen by learners for each relationship was ignored.This was done in order to allow for more overlap between individual learner maps in the collective network, and because, from a network perspective, the structure of the connections learners made between concepts was more interesting than the specific semantics they used to make the connections.
For example, if one learner included the statement Climate Change -> synonym -> Global Warming, and another learner included the statement Climate Change -> causes -> Global Warming in their respective maps, the collective map would contain one of four possible edges based on the merge strategy selected: 1) Climate Change -> Global Warming with a weight of 2 for the directed/weighted strategy, 2) Climate Change -> Global Warming with no weight for the directed/unweighted strategy, 3) Climate Change -Global Warming with a weight of 2 for the undirected/weighted strategy, and 4) Climate Change -Global Warming with no weight for the undirected/unweighted strategy.Here, "->" implies a directed edge and "-" implies an undirected edge.Figure 2 shows the four types of merge strategies used in the analysis.Due to differences in assignment requirements and scoring between the two quarters, combining both quarters of data into a single collective network was problematic.Instead, individual collective networks were created for each quarter.
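A minimal sketch of these four strategies, assuming each learner map has already been reduced to a list of (source, target) pairs with relationship labels stripped, is shown below; it illustrates the procedure described here rather than the actual study code.

```python
# Sketch: merging individual learner maps into a collective network.
import networkx as nx

def merge(maps, directed=True, weighted=True):
    G = nx.DiGraph() if directed else nx.Graph()
    for learner_map in maps:
        for u, v in learner_map:
            if weighted:
                # edge weight counts how many learners added this edge
                w = G[u][v]["weight"] if G.has_edge(u, v) else 0
                G.add_edge(u, v, weight=w + 1)
            else:
                G.add_edge(u, v)
    return G

maps = [[("Climate Change", "Global Warming")],
        [("Climate Change", "Global Warming"), ("CO2", "Climate Change")]]
G = merge(maps, directed=False, weighted=True)
print(G["Climate Change"]["Global Warming"]["weight"])  # 2
```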
Analysis of Maps
Analysis of the learner concept maps is broken down into several categories: qualitative metrics, individual concept map metrics (*), centrality metrics (**), and consensus metrics (**). Categories annotated with a single asterisk (*) are traditional Network Analysis metrics. Categories annotated with a double asterisk (**) are novel metrics using the methodology of merging individual learner concept maps into a collective network. The NetworkX Python library [31] was used to carry out the analyses, aside from the qualitative metrics. Note that only Comprehensiveness (in the qualitative metrics category) considers the actual content of the concept map; all of the other metrics are based either on individual map structure (traditional metrics) or the positioning of individual learner maps within the collective network (novel metrics).
Qualitative Metrics.
Qualitative metrics (also referred to as holistic metrics) are one of the most common forms of concept map evaluation. Evaluation of such metrics is typically carried out by an instructor or expert looking at a learner's concept map and assessing its merit based on content and/or structure, but without performing any counting or computation. Besterfield-Sacre et al. [5] define a rubric for several qualitative concept map metrics, including Comprehensiveness (the degree to which a map's content covers the relevant material) and Organization (orderliness of the arrangement of nodes and edges). Additionally, Yin et al. [72] define five common structural templates (Structural Form) that can be used to classify the structures of learner concept maps: "linear", "circular", "hub-spoke", "tree", and "network". The "network" structural template is characterized by a web of interconnected concepts, and is considered more indicative of meaningful learning than the other structural templates [36]. For the qualitative metrics, one member of the research team reviewed every individual learner concept map from both academic quarters and assigned each a score for Comprehensiveness, Organization, and Structural Form. Table 2 describes the qualitative metrics used in this analysis. One issue with qualitative analysis of concept maps is that such analysis is subjective in nature; for instance, many maps share characteristics of two or more of the structural templates, leaving it up to the scorer to make a judgment call.
Individual concept map metrics.
The quantitative analysis of the individual learner concept maps includes both simple counting-based metrics such as number of concepts and number of relationships, as well as traditional network metrics for concept map analysis: Average Shortest Path Length (ASPL), Clustering Coefficient (CC), Network Density, and Complexity.ASPL and CC are metrics that can be used to identify Small-World Networks (SWNs).SWNs are characterized by highly clustered groups and short path lengths from any given node to another.These types of networks are representative of many types of real-world systems [70].Past research has hypothesized that learners who build concept maps that conform to the characteristics of SWNs have a more complete understanding of the course material.Specifically, Siew [61] found that learners with lower ASPL scores and higher CC scores performed better on quizzes.
Figure 3 shows the structure of the individual learner concept maps with the highest and lowest scores for the metrics ASPL, CC, and Density from the Fall '21 quarter.This figure demonstrates that such metrics can be used as a quantitative alternative to the typically qualitative task of characterizing concept map structure.For example, both high ASPL and low CC scores are associated with what would qualitatively be labeled as a "chain" structure, whereas low ASPL and high Density scores reflect more of a "network" structure in the qualitative coding.Another category of individual concept map metric is based on identifying hierarchy within a concept map.Besterfield-Sacre et al. [5] define a hierarchy as a shortest path from a root concept (a concept with no parents) to a leaf concept (a concept with no children).They define three hierarchybased metrics that are used for quantifying concept map structure: number of hierarchies in the map, length of the deepest hierarchy, and number of cross-links between hierarchies.A higher total number of hierarchies, deeper hierarchies, and higher number of cross-links are associated with more complex concept map structure.
Table 3 contains descriptions of each of the individual metrics.
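For reference, the non-hierarchy metrics can be computed directly with NetworkX on a single (toy) learner map, as in the sketch below; note that ASPL is only defined on a connected graph.

```python
# Sketch: traditional per-map metrics on one learner's (toy) concept map.
import networkx as nx

m = nx.Graph([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")])

num_concepts = m.number_of_nodes()
num_relationships = m.number_of_edges()
density = nx.density(m)
cc = nx.average_clustering(m)
aspl = nx.average_shortest_path_length(m)  # requires a connected graph

print(num_concepts, num_relationships,
      round(density, 2), round(cc, 2), round(aspl, 2))
```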
Centrality metrics.
The centrality-based metrics measure whether learners are able to identify key nodes, as determined by their peers. Individual learner scores are calculated based on the average centrality, within the collective network, of the nodes that were part of their individual concept map. Three centrality metrics were calculated: Betweenness, Degree, and Closeness (described in Table 4). Figure 4 shows the positioning of two individual learner networks (red edges) within the collective network of all the learner concept maps from the Fall '21 quarter (blue edges). While one learner's individual concept map is positioned in the center of the collective network (left), the other learner's map passes through the center but mostly lies on the periphery of the collective network (right). We hypothesized that learners who made maps positioned more centrally within the collective network would have higher scores on the concept map assignment.
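A sketch of such a centrality-based learner score, assuming the collective network G has already been built and with an illustrative helper name, might look as follows:

```python
# Sketch: mean centrality, within the collective network G, of the nodes a
# learner included in their own map. The measure can be swapped for
# nx.betweenness_centrality or nx.closeness_centrality.
import networkx as nx

def centrality_score(G, learner_nodes, measure=nx.degree_centrality):
    cent = measure(G)
    shared = [n for n in learner_nodes if n in cent]
    return sum(cent[n] for n in shared) / len(shared) if shared else 0.0

G = nx.Graph([("A", "B"), ("B", "C"), ("B", "D"), ("D", "E")])
print(centrality_score(G, {"B", "D"}))  # (0.75 + 0.5) / 2 = 0.625
```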
Consensus metrics.
While the centrality metrics measure learner identification of key nodes, this section explores four metrics that instead measure learner identification of the correct edge structure between nodes. Specifically, these metrics compare an individual learner contribution to the local region of the collective network pertaining to the same nodeset (i.e., the set of concepts included in the individual learner's concept map). Thus, they measure the "edge overlap" that an individual learner concept map shares with the maps of other learners.
These metrics are important because global centrality metrics that consider the entire collective network do not account for how a learner's choice of nodeset may limit their centrality scores. Consider a learner who makes a high quality concept map but leaves out a single key node that was referenced by the majority of the other learners. This would substantially reduce their centrality score, which is based on which nodes a learner included. The consensus-based metrics introduced in this section do not directly penalize a learner for omitting popular nodes. Instead, they assess the ways that other learners collectively connected nodes within the same local network region, allowing for learners who chose to include a more diverse nodeset to still potentially receive high scores. Table 5 shows descriptions of each of the four consensus metrics.
Edge Consensus
Edge Consensus assesses how well a learner conforms with the general agreement of the class based on whether pairs of nodes are connected via edges. Each edge in the concept map of an individual learner is assigned a score based on the number of other learners that also included that edge in their map. The final score is calculated from the summed score of all of the edges in their map, divided by the total number of edges they included. The simplicity of this metric makes it easy to calculate, even on large networks.
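A minimal sketch of this calculation, assuming a weighted collective network in which each edge's weight attribute counts how many learners (including the current one) added that edge; the function name and attribute name are our assumptions, not the paper's code.

    def edge_consensus(collective, learner_graph):
        # Sum, over the learner's edges, the number of *other* learners who
        # also drew that edge, then normalize by the learner's edge count.
        total = 0
        for u, v in learner_graph.edges():
            if collective.has_edge(u, v):
                total += collective[u][v]["weight"] - 1  # exclude this learner
        return total / learner_graph.number_of_edges()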
We hypothesized that Edge Consensus will positively correlate with individual learner map quality, as it rewards learners who share a common understanding of the course material with their classmates. While the metric does not directly reward learners for including popular nodes, it does reward learners for including popular edges, which will indirectly lead to learners who included popular nodes receiving higher Edge Consensus scores.
Subgraph Coverage
Calculating Subgraph Coverage involves creating a subgraph of the collective network defined as the set of edges added by all of the learners between the nodes referenced in an individual learner's map. The number of edges in the individual map is then compared with the number of total edges in the subgraph. The intuition behind this metric is to evaluate the edge coverage that an individual learner has made over their selected nodeset compared to the rest of the class.
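One way to read this comparison is as the ratio below. This is our sketch rather than the paper's code, and the results discussion later suggests the implemented score may be oriented the other way (a lower score indicating higher coverage), so the direction of the ratio is an assumption.

    def subgraph_coverage(collective, learner_graph):
        # Edges the whole class drew among the learner's chosen concepts.
        region = collective.subgraph(learner_graph.nodes())
        if region.number_of_edges() == 0:
            return 0.0
        # Fraction of those class-wide edges that the learner also drew.
        return learner_graph.number_of_edges() / region.number_of_edges()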
It is hypothesized that higher Subgraph Coverage will be associated with higher individual concept map quality. In theory, learners who were able to cover more of their selected subgraph will be demonstrating a more thorough understanding of the concepts in the subgraph than learners who covered less of the subgraph.
Collective Shortest Path
The Collective Shortest Path metric assesses the ways that other learners collectively represented the relationships present in each individual learner's concept map. For each individual concept map, a modified collective network is created based on all of the other learners' concept maps but excluding the current learner's map. Then, for each edge connecting a pair of nodes in the learner's map, the shortest path between this pair of nodes in the modified collective network is calculated. If another learner added the same edge, then the shortest path is 1; if a path does not exist, then the longest shortest path in the collective network is assigned as the score for the edge in question.
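A sketch of the per-edge scoring under these rules, assuming the leave-one-out network has already been built and the fallback value (the longest shortest path in the collective network) is passed in; the names here are ours.

    import networkx as nx

    def collective_shortest_path(modified_collective, learner_graph, fallback):
        scores = []
        for u, v in learner_graph.edges():
            try:
                scores.append(nx.shortest_path_length(modified_collective, u, v))
            except (nx.NetworkXNoPath, nx.NodeNotFound):
                # No path (or node) in the leave-one-out network.
                scores.append(fallback)
        return sum(scores) / len(scores)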
The Collective Shortest Path metric rewards learners who included edges also included by other learners, which is also the case in the Edge Consensus metric. However, Collective Shortest Path also rewards learners who made unique connections between nodes that are in adjacent regions of the collective network, even if no other learner directly connected those nodes. Conversely, it punishes learners that connected nodes that would otherwise have been far away from each other in the collective network.
The guiding principle behind this metric is that learners may include unique edges that are still of high quality; in fact, unique edges may actually provide an important bridge between concepts that would otherwise not have been connected. However, we argue that unique edges of high quality are more likely to occur between two nodes that are already close together, whereas unique edges of low quality may bridge two nodes that otherwise would have been far apart. Following from this, we hypothesized that learners who receive a lower Collective Shortest Path score will have higher quality concept maps.
Communicability
Unlike the other three metrics in this section, Communicability is borrowed from previous literature [23]; however, its application to assigning individual learner scores based on their contributions to a collective network is novel. It uses the same modified collective network as Collective Shortest Path; however, in contrast to Collective Shortest Path, which only considers the shortest alternative path between each pair of nodes, Communicability applies a weighted sum of the lengths of all alternative paths in the collective network between every pair of nodes in each individual student network, such that longer paths are weighted less heavily than shorter paths. It rewards learners that connected nodes that were reachable by a wide variety of other paths, indicating that such learners made connections in important parts of the collective network. Therefore, we hypothesized that a higher score will lead to higher quality concept maps. Figure 5 is a visualization showing a simple calculation of each of the three consensus metrics introduced in this paper (excluding Communicability, which is detailed in [23]).
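NetworkX ships a communicability implementation; the averaging over a learner's edges below is our own sketch, not the paper's code. As the results section notes, the NetworkX routine only accepts unweighted, undirected graphs.

    import networkx as nx

    def learner_communicability(modified_collective, learner_graph):
        # All-pairs communicability of the leave-one-out collective network:
        # a weighted sum over all walks, down-weighting longer ones.
        comm = nx.communicability(modified_collective)
        scores = [comm[u][v] for u, v in learner_graph.edges()
                  if u in comm and v in comm[u]]
        return sum(scores) / len(scores)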
Summary Statistics
Summary statistics for the collective networks are shown in Table 6. Learners in the Spring '21 quarter made larger networks than in the Fall '21 quarter, in terms of both nodes and edges. This is likely due to differences in the assignment instructions; learners had one edge label to choose from in the fall, vs. 20 in the spring. However, Network Density and Average Node Degree are similar between the networks, indicating that the structure of the two collective networks is similar. The average learner score differs greatly between quarters due to different scoring methodologies; the TA grading team assigned much higher learner map scores than the research team member, leading to a much higher average score for the Fall '21 quarter than the Spring '21 quarter.
Table 7 shows summary statistics for the qualitative metrics. Learner maps in the Spring '21 quarter scored higher for both Comprehensiveness and Organization than maps for the Fall '21 quarter. The Spring '21 quarter data also contained far more learner maps coded as a "network" structure than Fall '21 (88.5% of all maps vs 27.9% of all maps). Summary statistics for quantitative metrics are shown in Table 8. The mean, standard deviation, and range of each metric output are shown.
Correlation Results
To evaluate the performance of each metric, the statistical correlation between each network metric and the concept map score is calculated on a learner-by-learner basis. The Pearson correlation coefficient was used, except in the case of the qualitative network structure metric, which is a categorical variable. To handle the statistical challenge of correlating a categorical with a continuous variable, this metric was broken out into five separate Boolean-valued metrics representing each of the five structural templates (i.e., "linear", "tree", etc.), and the point-biserial correlation was used.
In the tables below, each cell contains the Pearson or point-biserial correlation coefficient representing the correlation between the network metric and the learner concept map scores, as well as the p-value of the correlation. Due to the large number of statistical tests run in this analysis, we apply the Sidak-Holm adjustment to account for the Family-Wise Error Rate [30]. Based on this adjustment, we consider p < .0006 to be a statistically significant result. Such results are shown in bold. Results meeting the standard threshold of statistical significance of p < .05 are marked with an asterisk (*).
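A sketch of this testing workflow with SciPy and statsmodels, on hypothetical per-learner data; the Sidak-Holm adjustment corresponds to statsmodels' "holm-sidak" method, and all data values below are invented for illustration.

    from scipy.stats import pearsonr, pointbiserialr
    from statsmodels.stats.multitest import multipletests

    # Hypothetical per-learner values: assignment scores, one continuous
    # metric, and a Boolean flag for the "network" structural template.
    scores     = [78, 85, 90, 62, 88, 95, 70, 81]
    metric     = [2.1, 2.8, 3.0, 1.5, 2.9, 3.4, 1.9, 2.5]
    is_network = [0, 1, 1, 0, 1, 1, 0, 1]

    r, p = pearsonr(metric, scores)                  # continuous metric
    r_pb, p_pb = pointbiserialr(is_network, scores)  # Boolean-valued metric

    # Family-wise error correction across the whole family of tests.
    reject, p_adj, _, _ = multipletests([p, p_pb], alpha=0.05,
                                        method="holm-sidak")
    print(r, r_pb, p_adj, reject)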
Traditional Metrics from the Literature.
Qualitative Metrics: The qualitative results in Table 9 show that, in spite of their frequent use in past literature [34], the qualitative metrics are not highly correlated with learner performance on the concept mapping assignment, with the exception of Comprehensiveness for the Fall '21 quarter, which addresses the scope of the content contained within the map rather than the structure of the map.

Individual, Non-Hierarchy-based Metrics: The results of the quantitative metrics in Table 10 show that the individual, non-hierarchy-based metrics correlated with learner performance for the Spring '21 quarter but not for the Fall '21 quarter. Unexpectedly, CC and Density both correlated negatively with learner performance, where positive correlations were hypothesized in both cases. This can be explained by the observation that, for the dataset in question, CC and Density are both positively correlated with each other (p < .0001) and both are negatively correlated with number of concepts (p < .0001), while number of concepts was strongly positively correlated with learner performance. The general thinking behind using CC and Density to analyze concept maps is that they provide an easy-to-compute method to characterize the complexity of the network; however, a confounding factor is that a map with fewer nodes can lead to higher CC and Density scores, but also be indicative of lower learner effort.

Individual, Hierarchy-based Metrics: The individual, hierarchy-based results in Table 11 show that, while some of these metrics correlate with learner performance, none of the results are reproduced across quarters. The number of hierarchical levels was a useful predictor of performance in the Fall '21 quarter, and the number of hierarchies was highly correlated with performance for Spring '21.

In general, the traditional metrics from the literature do not excel at predicting learner performance across the two datasets used in this study. While there are some isolated statistically significant correlations, none of the results for any of the traditional metric categories were reproduced across both quarters of data. These results motivate the investigation of metrics that move beyond assessing individual learner conceptions in isolation to assessing them in relation to the conceptions of their peers.
Metrics based on Collective Network.
For the metrics in this subsection, there are four rows for each academic quarter, representing the results of the various merge strategies: unweighted/directed (UD), weighted/directed (WD), unweighted/undirected (UU), and weighted/undirected (WU).
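To make the four strategies concrete, a minimal merging sketch is shown below; the flag names are ours, and the weight attribute simply counts how many learners added each edge.

    import networkx as nx

    def merge_maps(learner_maps, weighted=True, directed=True):
        # Merge individual learner maps into one collective network.
        collective = nx.DiGraph() if directed else nx.Graph()
        for m in learner_maps:
            for u, v in m.edges():
                if collective.has_edge(u, v):
                    if weighted:
                        collective[u][v]["weight"] += 1
                else:
                    collective.add_edge(u, v, weight=1)
        return collective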
Centrality Metrics: Table 12 shows the results of the centrality metrics. In general, the centrality metrics produced stronger correlations for the Fall '21 quarter, where the majority of the results were statistically significant. Consensus Metrics: Table 13 shows the results of the consensus metrics. These metrics produced a large number of statistically significant correlations with learner performance across both quarters. As hypothesized, Edge Consensus and Communicability produced positive correlations with student score across both datasets, while Collective Shortest Path produced negative correlations. Interestingly, Subgraph Coverage produced strong positive correlations, despite negative correlations being hypothesized. In this metric, a lower score indicates higher coverage, and higher coverage implies more overlap with other learners. This means that concept maps with higher coverage contained lower quality content, an unexpected result.
However, an important downside of the Subgraph Coverage metric is that it has the potential to penalize learners that included popular nodes. Popular nodes are more likely to have many edges between them, so it is more difficult for an individual learner to cover all of these edges. Conversely, in the extreme case where a learner creates a concept map with a nodeset completely distinct from the nodes included by all other learners, this learner would achieve a perfect coverage score. This introduces a confounding factor in the Subgraph Coverage metric, indicating that the metric needs to be refined before it can be considered useful.

On the dataset used in this paper, most of the novel metrics were better indicators of student performance than the traditional metrics. The results from the novel metrics were also more consistent across both academic quarters than the results from the traditional metrics. The consensus-based metrics were overall the best predictors of student map quality, although the centrality metrics were also relatively useful predictors. In contrast, the qualitative metrics and individual concept map metrics were less reliable predictors.
The simplicity of the qualitative and individual map metrics may partially explain their past use for concept map analysis tasks; however, these metrics consider each student's concept map individually and do not consider the interplay between an individual student's conceptions and the course's collective conceptions of the material. Creating a collective network of merged student concept maps enables a variety of novel metrics that treat the entire course's collective understanding as a proxy for expert understanding, providing a knowledge base against which individual student contributions can be compared.
An additional distinction between the traditional and novel metrics is that the traditional metrics typically make the assumption that a learner's performance is tied to the structural shape of their concept map, irrespective of the selected topics. In contrast, the novel metrics assign a score based on the positioning within the collective network of the concepts and relationships that a student included, without considering the structure of the individual student map. The results from the novel metrics indicate that a learner's correct selection of key nodes (centrality) as well as correct selection of specific edges between them (consensus) should be considered valuable indicators of learner performance.
While the novel metrics in this paper introduce a new framework for concept map evaluation, they are not perfect indicators of student performance and are not intended to replace traditional metrics. Past research has shown that many of the traditional metrics also provide predictive value. Particularly, it is widely agreed that concept maps demonstrating expert-level understanding more commonly assume a "network"-like structure with high interconnectivity between concepts, which can be assessed via qualitative evaluation or traditional network metrics such as ASPL. This type of evaluation is not enabled by the novel metrics, as they assess an individual map only in terms of its relation to the collective network rather than its internal structure.
Rather than replacing traditional metrics, the novel metrics introduced in this paper should be viewed as viable accompaniments to previous methodology. In many cases, the best results may be obtained from an ensemble of different metrics. As our own results show, the efficacy of various metrics depends largely on the data itself; however, practically no studies have tried to apply these metrics to a diverse set of concept map data from various educational contexts. Such a study is needed to better determine the generalizability of these metrics.
Future Work
This paper is in part a response to calls from the CSCL community to explore more diverse forms of network actors, relational ties, and metrics within the Network Analysis space. This call is answered via a Network Analysis of learner concept map data, a nascent but intriguing area of investigation. While initial results are promising, this work can be improved or extended in a few key areas.
For one thing, the two datasets used for this study had different characteristics, largely due to the instructor varying the assignment parameters between quarters. These differences caused changes in the effectiveness of the metrics, especially the individual concept map metrics. This observation points to the importance of aligning one's metrics with the particulars of the concept mapping assignment. As concept mapping assignments vary widely, further research is needed to pinpoint the most appropriate set of metrics for each variation or category of assignment.
An open question about this research is how well the findings of this paper can translate to other classroom settings. For instance, courses about technical subjects such as programming or math contain more abstract concepts than the course about sustainability used in this study; it is unclear how well the metrics introduced in this paper could aid instructors in predicting the understanding of learners in such courses. It will be important to collect data across various domains and levels of education in order to properly address concerns about the scalability of such metrics beyond the single course used in this analysis.
Our findings indicate that the novel metrics introduced in this paper have potential as predictive indicators of learner understanding of course material. Such information could be useful to instructors in a variety of different ways. For instance, instructors could use the results of these metrics in order to identify potentially struggling learners without having to assess each learner's concept map qualitatively. Additionally, characterizing what learners know at a given moment in time could help instructors to assemble project or study groups based on overlaps (or non-overlaps) in knowledge or interest.
There is also potential to use the collective network of merged learner concept maps as a pedagogical tool. For example, learners could research key nodes and/or edges that they overlooked but that other learners included, and write up a description of the nature of the missed content. Another possibility is to allow learners to browse an online visualization of the collective network to promote open-ended reflection. This experience allows learners to reflect on how their individual conceptual understanding connects to those of their peers. Having learners actively participate in the process of network construction, with each learner or group of learners taking responsibility for one piece of the final network, is in line with past CSCW research that emphasizes the human role in knowledge management [1], and can also be viewed as a form of Collaborative Sensemaking [67]. While the concept mapping assignments themselves are performed individually, by collecting mental models from learners in a standardized format, these assignments enable downstream collaboration and interplay with existing resources [26].
Finally, the current work does not address potential issues introduced by varying levels of concept specificity, a generally challenging problem in concept mapping. Future steps to address this issue may take the form of enhancing network visualizations to emphasize chains of intermediary nodes between two broad concepts, or using external resources such as Wikipedia to suggest potential alternatives for concepts that are too broad to be informative in a given context.
CONCLUSION
Network Analysis is shown to be a promising technique for analyzing the concepts and relationships in collaboratively-constructed educational networks. Typically, these networks have been composed of learners and their interactions, and it has been shown that learners occupying central positions within these networks tend to have better learning outcomes. This paper answers the call for analyses of more diverse types of networks within the CSCL space, and at the same time contributes to the relatively unexplored area of Network Analysis over Novakian concept map data. In particular, this paper introduces the novel methodology of merging learner concept maps into a collective network and using this collective network to calculate centrality and consensus-based metrics for individual learners. When evaluated on two academic quarters of concept mapping data, these novel metrics are shown to be more significantly correlated with learner performance and more reproducible across datasets than the metrics traditionally used to evaluate concept maps in the literature.
While it is clear that network metrics can, in some sense, predict learner understanding of the material, it is also clear that the complexity of concept map-based networks will require more work to understand which metrics (or combinations of metrics) are the best predictors of learner conceptual understanding in a given context. Moving forward, it will be important to perform such studies across larger and more diverse datasets, in order to further evaluate the suitability of these metrics, and Network Analysis as a whole, for predicting learner understanding.
ACKNOWLEDGMENTS
This work was supported by the National Science Foundation award number 2121572.
Fig. 1.
Fig. 1. The upper screenshot shows part of one learner's concept map submission from the Spring '21 quarter in the CmapTools interface. The lower screenshot shows an example concept map created by a research team member using the custom concept mapping interface used for the Fall '21 quarter.
Fig. 2.
Fig. 2. Two example learner concept maps merged into a collective network using each of the 4 merge strategies used to conduct the analysis. The networks with blue (top left) and red (bottom left) nodes represent each of the two learner maps, and the networks with purple nodes (four most rightward) represent the 4 possible collective networks.
Fig. 3.
Fig. 3. The learner networks with the highest and lowest scores for three of the individual network metrics from the Fall '21 quarter, showing a diversity of network structures in individual learner submissions.
Fig. 4.
Fig. 4. Two individual learner concept maps from Fall 2021 layered in red over the course-wide collective network in blue. The learner map on the left exhibits a relatively high Node Betweenness score, whereas the map on the right exhibits a relatively low Node Betweenness score.
Fig. 5.
Fig. 5. An example individual learner map (left, blue), alongside a collective network that includes the individual learner map (second from left, red), and the calculation of three novel consensus metrics (three most rightward, purple). In the three purple graphs showing the calculations, red arrows indicate edges in the individual learner map, black arrows indicate edges in the collective network not present in the individual learner map, and the dotted red arrow indicates an edge that is present in the individual learner map that was not made by any other learner. These examples use the weighted/directed merge strategy. For Edge Consensus, the weights of the relevant edges in the collective network are summed. For Subgraph Coverage, the individual learner map contains 3 of the 7 edges (highlighted in red) that the collective network contains between the nodeset referenced in the individual learner map. For Collective Shortest Path, 2 of the 3 edges in the individual learner map were referenced by at least one other learner, leading to scores of 1 for those edges. The final edge is assigned a score of 2 as the algorithm found a shortest path from A->D through the node C.
Table 1.
Quarter-by-Quarter Concept Map Assignment Details
Table 2.
Summary Descriptions of Qualitative Metrics
Table 3.
Summary Descriptions of Individual Network Metrics
Table 4.
Summary Descriptions of Node Centrality Network Metrics
Table 5.
Summary Descriptions of the Consensus Network Metrics
Table 6.
Summary Statistics for Quarter-by-Quarter Collective Networks. Here, the "Number of Statements" column is the total number of statements in all of the individual learner submissions, whereas the two "Number of Edges" columns show the number of edges remaining in the collective network once directed or undirected merge strategies have been applied.
Table 7.
Summary Statistics for Qualitative Metrics
Table 8.
Summary Statistics for Quantitative Metrics. The Mean, Standard Deviation (SD), and Range (RNG) of all metric outputs are shown, broken down by quarter. The Mean is represented as the first number in each cell, while the SD and RNG are shown in parentheses.
Table 10.
Individual, Non-Hierarchy-based Metrics Results. Note that Number of Concepts is not listed for Fall '21 because learners were required to include a fixed number of nodes for this quarter.
Table 13.
Consensus Metrics Results. Note that only weighted merge strategies were applied to Edge Consensus, as this metric can only function in the presence of an edge weight. Also note that only the unweighted, undirected strategy was applied to Communicability, due to the NetworkX implementation of this metric. This paper presents several novel Network Analysis metrics for analyzing learner concept maps based on merging individual maps into a collective network. Correlations are computed between centrality and consensus-based metrics derived from this collective network as well as from several categories of metrics traditionally used in the concept mapping literature. | 2024-05-01T15:33:38.057Z | 2024-04-17T00:00:00.000 | {
"year": 2024,
"sha1": "410e5f4d03871e0acd4078876ba853262fa35b03",
"oa_license": "CCBY",
"oa_url": "https://dl.acm.org/doi/pdf/10.1145/3637313",
"oa_status": "HYBRID",
"pdf_src": "ACM",
"pdf_hash": "e2a589102130894d1317aebf57bcaa7e5f880b20",
"s2fieldsofstudy": [
"Education",
"Environmental Science",
"Computer Science"
],
"extfieldsofstudy": []
} |
232246322 | pes2o/s2orc | v3-fos-license | Controlling Cancer Cell Behavior by Improving the Stiffness of Gastric Tissue-Decellularized ECM Bioink With Cellulose Nanoparticles
A physiologically relevant tumor microenvironment is favorable for the progression and growth of gastric cancer cells. To simulate the tumor-specific conditions of in vivo environments, several biomaterials engineering studies have investigated three-dimensional (3D) cultures. However, the implementation of such cultures remains limited because of challenges in outlining the biochemical and biophysical characteristics of the gastric cancer microenvironment. In this study, we developed a 3D cell printing-based gastric cancer model, using a combination of gastric tissue-specific bioinks and cellulose nanoparticles (CN) to provide adequate stiffness to gastric cancer cells. To create a 3D gastric tissue-specific microenvironment, we developed a decellularization process for a gastric tissue-derived decellularized extracellular matrix (g-dECM) bioink, and investigated the effect of the g-dECM bioink on promoting the aggressiveness of gastric cancer cells using histological and genetic validation methods. We found that incorporating CN in the matrix improves its mechanical properties, which supports the progression of gastric cancer. These mechanical properties are distinguishing characteristics that can facilitate the development of an in vitro gastric cancer model. Further, the CN-supplemented g-dECM bioink was used to print a variety of free-standing 3D shapes, including gastric rugae. These results indicate that the proposed model can be used to develop a physiologically relevant gastric cancer system that can be used in future preclinical trials.
INTRODUCTION
Gastric cancer is the fourth most common cancer, and the second most common cause of cancer-related death worldwide (Spolverato et al., 2015). In Western countries, more than 80% of patients that are diagnosed with advanced gastric cancer have poor prognosis. As a result, the 5-year survival rate for this disease is under 30% (Roukos, 2000). To date, surgical therapy is the only approach that completely eliminates local tumors; however, the opportunity to remove a patient's tumor is often lost, as diagnosis occurs too late (Spolverato et al., 2015). Patients with advanced stage gastric cancer receive chemotherapy as well as adjuvant or neoadjuvant therapy. Although this approach achieves improved therapeutic effects, survival rates remain unsatisfactory because of the tumors' high drug resistance (Yuan et al., 2017).
As the progression and growth of gastric cancer is influenced by the tumor microenvironment (da Cunha et al., 2016; Jang et al., 2018), establishing a physiologically relevant microenvironment is increasingly important in in vitro study. In particular, the extracellular matrix (ECM) surrounding cancerous growths regulates cellular functions such as migration and proliferation, through both cell-cell and cell-ECM interactions, which further affects cancer progression and aggressiveness (Crotti et al., 2017). Moreover, a decellularized tissue ECM (dECM) provides a tissue-specific microenvironment for the cells, and directs cellular behavior in cancerous growths (Hoshiba, 2019; Ferreira et al., 2020). Although several naturally derived biomaterials such as collagen and Matrigel have been used for mimicking the cancer system (Jang et al., 2017), these purified materials find it difficult to recreate the substrata of their intrinsic environment (Tian et al., 2018). In this respect, development of biomaterials can provide cancer-specific microenvironmental components and compositions, which are essential in regulating in vivo-like cellular behaviors.
Recently, several studies have demonstrated that decellularized extracellular matrixes promote cancer cell behavior (Rijal and Li, 2017; Jin et al., 2019); a lung-derived decellularized ECM enabled the demonstration of cancer cell proliferation, with its morphological differences inducing the aggregation of cancer cells (Tian et al., 2018). Furthermore, through its control of the integrin-mediated pathway, ECM stiffness has a high potential to regulate the activation of cancer cell signaling (Seewaldt, 2014); with an increase in matrix stiffness, the promotion of integrin β1 clustering and the activation of β-catenin were observed, leading to an escalation of invasion and metastasis behaviors. Diverse attempts have been made to achieve sufficient mechanical strength for bioengineered matrixes, including increasing hydrogel concentration, or reinforcing the material by adding cellulose nanoparticles (CN), which are among the most widespread natural materials and have biocompatible characteristics (Jang et al., 2018; Athukoralalage et al., 2019). However, these approaches are yet to be studied in detail for the development of biochemically and biophysically related materials. Tissue-specific biomaterials and the regulation of matrix stiffness are crucial, as they can enable a more comprehensive assessment of gastric cancer cell responses by simulating the real microenvironment.
In this study, we introduce a mechanically reinforced bioink, consisting of gastric dECM (g-dECM) and CN, that models a biochemical microenvironment characteristic of gastric cancer. Moreover, CN enables the modulation of matrix stiffness, thereby achieving improved biophysical features. In addition, using a three-dimensional (3D) cell printing system, we fabricated 3D structures, including a mimic of a gastric ruga, using cell-laden bioink. Finally, we observed enhanced cancer-related characteristics such as cell aggregates, cellular interactions, and drug resistance in the developed bioink, compared with Matrigel and collagen.
Decellularization of Gastric Tissue
Fresh porcine gastric tissue was obtained from a butcher shop (Pignara). Before starting the decellularization process, the mucosa layer was removed from the porcine gastric gland, cut into approximately 0.5-mm-thick slices, and washed with distilled water for 1 h to remove any remaining blood. The sliced tissues were then rinsed in a 25 mM 1 wt% sodium dodecyl sulfate (Thermo Fisher Scientific) solution for 24 h, and 25 mM 1% Triton X-100 solution (Sigma-Aldrich) for another 24 h, to remove residual cells. The tissues were subsequently treated in PBS for 24 h to wash the chemical detergents, and sterilized in 0.1 w/v% peracetic acid solution for 1 h. Following this, they were washed with PBS and distilled water for 30 min. Thereafter, decellularized gastric tissues were deep frozen at −80 °C and lyophilized for 48 h. A g-dECM pre-gel solution was prepared by digesting 200 mg of the ground g-dECM powder in a solution of 0.5 M acetic acid (DUKSAN) supplemented with 20 mg of pepsin (Sigma-Aldrich), and stirring vigorously for 72 h. The biochemical characteristics of the g-dECM were evaluated using the remaining DNA, collagen, and glycosaminoglycans (GAGs), as described previously (Pati et al., 2014).
Before using the g-dECM bioink in experiments, the pH was adjusted to 7.4 by adding a 10 M NaOH solution, for thermal gelation. The g-dECM bioink and NaOH solution were kept on ice during this pH adjustment process, to prevent gelation before use.
Preparation of Cellulose Nanoparticles
The aqueous suspensions of CN were prepared using a modified protocol from the literature (Kumar et al., 2017). In brief, acid hydrolysis was performed by stirring microcrystalline cellulose (MCC, Sigma-Aldrich) with a 64 wt% H2SO4 solution at 45 °C for 60 min. This reaction was quenched with the addition of cold distilled water. The chilled solution was centrifuged several times, and dialyzed in distilled water, using SnakeSkin dialysis tubing (Thermo Fisher Scientific), to remove the acidic solution. The prepared aqueous suspensions of CN were stored at 4 °C for further use.
Transmission Electron Microscopy
The morphology of the CN was examined by transmission electron microscopy (TEM, JEM-1011, Jeol). The aqueous suspensions of CN were diluted to 0.1 wt% and dropped onto the surface of a thin carbon film-coated copper grid. The sample was dried overnight, following which, TEM analysis was performed at an accelerating voltage of 100-120 kV.
Preparation of CN-Supplemented g-dECM Bioink
To prepare the CN-supplemented g-dECM bioink (CN-g-dECM bioink), CN solution was added to the 2% g-dECM bioink in a 1:40 ratio. The final concentration of the CN-supplemented bioink was varied in the range 0.01-0.5 wt% by adjusting the dilution of the aqueous CN solution prior to combination with g-dECM. This combination was mixed by applying over 70 cycles of gentle pipetting, to ensure the distribution of CN in g-dECM was uniform. In addition, to characterize the effect of CN on cellular behavior, we also created a g-dECM bioink control without CN (0% CN-g-dECM) for use in experiments.
Rheological Characterization
The rheological properties of the g-dECM bioink were characterized using a rheometer (DHR-2, TA Instruments) with a 20 mm-diameter plate. To determine its viscosity, a steady shear sweep analysis of the pre-gel bioink was performed at 15 °C. Dynamic frequency sweep examinations were performed to analyze the material's frequency-dependent storage (G′) and loss (G″) moduli at a 2% strain in the range 0.1-100 rad s⁻¹ after incubation for 30 min at 37 °C.
Rheological assessment of the CN-g-dECM bioink was performed similarly; dynamic frequency sweeps were conducted to measure the material's frequency-dependent storage (G′) and loss (G″) moduli at a 2% strain in the range 0.1-100 rad s⁻¹ after incubation for 30 min at 37 °C, and treatment with 100 × 10⁻³ M calcium chloride (CaCl2) solution.
2D/3D Cell Culture
Gastric cancer cell lines (AGS, SNU-1, and KATO-III, Korean Cell Line Bank, South Korea) were cultured in RPMI 1640 (Gibco) supplemented with 10% FBS (Gibco) and 1% penicillin/streptomycin (Gibco). For the 3D cell culture, each cell line was encapsulated in g-dECM bioink, CN-g-dECM bioink, collagen, and Matrigel (Corning). The cell-printed g-dECM bioink, collagen, and Matrigel were fabricated and gelated by incubating at 37 °C for 30 min. The cell-printed CN-g-dECM was crosslinked via treatment with 100 × 10⁻³ M CaCl2 solution, and the printed structure was then incubated at 37 °C for 30 min. Every cell-laden hydrogel was refreshed with a cell culture medium every other day and harvested for further analysis.
Cell Viability Assay
Cell viability was evaluated by staining with Calcein AM and ethidium homodimer-1 solution (LIVE/DEAD Viability/cytotoxicity Kit, Thermo Fisher Scientific) following the instructions provided by the manufacturer.
Quantitative Polymerase Chain Reaction (qPCR)
The total RNA from collected hydrogels was isolated using the GeneJET RNA Purification Kit (Thermo Fisher Scientific) following the manufacturer's instructions. Complementary DNA (cDNA) was synthesized using the Maxima First Strand cDNA Synthesis Kit (Thermo Fisher Scientific) according to the manufacturer's instructions. Gene expressions were then analyzed with SYBR Green PCR Master Mix and the StepOnePlus real-time PCR system (Applied Biosystems). The fold changes of the target genes were calculated using the 2^−ΔΔCt method by normalizing them with the housekeeping gene (GAPDH) expression. Coding sequences for GAPDH, matrix metalloproteinase-2 (MMP2), catenin beta-1 (β-catenin), and integrin beta-1 (integrin β1) were designed using the National Center for Biotechnology Information reference sequences (Table 1) and Primer Express software v3.0.1 (Thermo Fisher Scientific) for preparing primers.
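For reference, a minimal sketch of the 2^−ΔΔCt fold-change calculation described above, using hypothetical Ct values; this illustrates the formula only, not the authors' analysis script.

    # Hypothetical Ct values for a target gene and the GAPDH housekeeping
    # gene, in a treated sample and an untreated control.
    ct_target_sample,  ct_gapdh_sample  = 24.1, 18.0
    ct_target_control, ct_gapdh_control = 26.5, 18.2

    delta_ct_sample  = ct_target_sample - ct_gapdh_sample    # normalize to GAPDH
    delta_ct_control = ct_target_control - ct_gapdh_control
    delta_delta_ct   = delta_ct_sample - delta_ct_control

    fold_change = 2 ** (-delta_delta_ct)
    print(fold_change)  # >1 means upregulation relative to the control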
Histological Analysis
To perform hematoxylin and eosin (H&E) staining, all cell-laden hydrogels were incubated in 10% buffered formalin solution for 30 min, washed three times with PBS, and embedded in paraffin. The hydrogels were subsequently sectioned to 30 µm slices using a Reichert-Jung 2035 microtome and placed on glass slides. The sections were immersed in xylene I and xylene II solution for 5 min each to remove the paraffin, then immersed in 100, 95, 80, and 75% ethanol solution, for 3 min each, and rinsed with distilled water for 5 min, for hydration. Then, sections were placed in hematoxylin solution for 10 min, washed with running tap water for 2 min, and placed in 1% acid alcohol solution for 5-30 s. To complete staining, the sections were immersed in eosin solution for 2 min, washed with running tap water for 1 min, and placed in ethanol solutions (70, 95, and 100%) for 2 min each, for dehydration. Finally, sections were immersed in xylene I and xylene II solution for 5 min each, and the glass slides were sealed with a coverslip, using Permount Mounting medium (Thermo Fisher Scientific). The H&E-stained samples were visualized with a microscope. The sizes of the cell aggregates in H&E images were measured using the analysis tools in ImageJ software version 1.47 (National Institutes of Health, United States). The size of cell aggregates in an experimental group was subsequently calculated as the average of measurements from three different samples.
Immunostaining
To perform immunofluorescence staining experiments, all cell-laden hydrogels were fixed with 10% buffered formalin solution for 30 min, washed three times with PBS, and permeabilized with 0.1% Triton X-100 in PBS for 15 min. Next, hydrogels were stained with Alexa Fluor™ 594 phalloidin and 4′,6-diamidino-2-phenylindole (DAPI) (Thermo Fisher Scientific) and examined with a laser confocal microscope (Leica TCS SP5 II).
3D Cell Printing Using the Gastric Cancer Cell-Laden g-dECM Bioink
To print the in vitro gastric cancer structures, we used a previously developed extrusion-based 3D cell-printing system named the Integrated Composite tissue/organ Building System (ICBS) (Supplementary Figure S1; Kim et al., 2017). The bioinks were prepared by encapsulating gastric cancer cell lines (cell concentration: 5 × 10⁶ cells mL⁻¹) with 2% g-dECM pre-gel solution into each hydrogel. A grid pattern, rectangular shape, and gastric ruga shape were manufactured using the in-house developed 3D cell printing system with the 2% g-dECM bioink.
The printing was performed at 15 °C using a 300 µm nozzle, and the pneumatic dispensing pressure was regulated in the range 20-70 kPa using the Nano Master SMP-III (Musashi Engineering, Ltd.). All printed structures were incubated at 37 °C for 30 min and refreshed with a cell culture medium every other day.
Statistical Analysis
In this paper, statistical data are expressed as mean ± standard error. The Student's t-test was conducted to compare two different experimental groups, whereas one-way analysis of variance was performed to compare more than two different experimental groups. These procedures were followed by post hoc analysis using Tukey's multiple comparisons test. Values were considered significant at *p < 0.05, **p < 0.01.
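A hedged sketch of this statistical workflow in Python, on hypothetical measurements from three groups; the data values and group labels below are invented for illustration.

    import numpy as np
    from scipy.stats import ttest_ind, f_oneway
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    g1 = np.array([2.1, 2.4, 2.2])
    g2 = np.array([3.0, 3.3, 3.1])
    g3 = np.array([1.2, 1.4, 1.1])

    t, p_t = ttest_ind(g1, g2)     # two groups: Student's t-test
    f, p_f = f_oneway(g1, g2, g3)  # more than two groups: one-way ANOVA

    # Tukey's multiple comparisons as the post hoc test after ANOVA.
    values = np.concatenate([g1, g2, g3])
    labels = ["g1"] * 3 + ["g2"] * 3 + ["g3"] * 3
    print(pairwise_tukeyhsd(values, labels))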
Preparation and Characterization of the Gastric Decellularized Extracellular Matrix-Derived Bioink
We successfully developed processes for the preparation of g-dECM from native gastric tissue (Figures 1A,B). Our method for developing g-dECM removes cellular material from tissue while minimizing ECM loss and damage. This was validated through a DNA quantification assay, which determined that <37 ng/mg of dsDNA remained in the g-dECM, 2.7 ± 0.3% of the quantity in native tissue, whereas the collagen and GAG concentrations in the g-dECM were 173 ± 3% and 80 ± 3% of the content of native tissue (Figure 1B). For effective decellularization, the quantity of cellular components should be less than 3% of the native tissue, and less than 50 ng/mg in the dECM (Pati et al., 2014). These results thus indicate that we effectively decellularized gastric tissue while preserving ECM components. For cell culture, the pH of the g-dECM bioink is adjusted using NaOH (Figure 1C). When incubated at 37 °C for 30 min, the pH-adjusted g-dECM bioink showed a heat-induced sol-to-gel transition in response to temperature changes. Before performing 3D cell culture, we measured the shear viscosity and storage/loss modulus of pH-adjusted g-dECM bioink to ensure its suitability for extrusion-based printing, and verify the shape retaining ability of printed structures. Both 1 and 2% g-dECM bioink showed a shear thinning behavior, wherein the viscosity of the bioink decreased as the shear rate increased (Figure 1Di). Such shear thinning behavior is vital for 3D cell-printing techniques, because it enables the dispersal of the bioink during printing. Further, after incubating at 37 °C for 30 min, the storage modulus was higher than the loss modulus for both bioinks, indicating that they can retain their shape (Figure 1Dii), a critical factor for the fabrication of 3D cell-printed constructs (Pati et al., 2014). However, the 2% g-dECM demonstrated higher mechanical stability metrics than the 1% g-dECM bioink, suggesting that it is more suited to 3D cell culture.
To evaluate the toxicity of the developed g-dECM bioink, a fundamental aspect of developing biomaterials (Stoddart, 2011a,b), we examined the cell viability of the g-dECM bioink with reference to that of collagen. In these experiments, we encapsulated 3D printed cell constructs using AGS, a gastric cancer cell line, in the pH-adjusted g-dECM bioink, and in collagen. Over 95% cell viability was observed with both groups on day 14 (Figure 1E), indicating that the g-dECM bioink is non-cytotoxic, given its similar response to the Type I collagen hydrogel.
3D Printing of Gastric Cancer Cells
3D cell printing is a promising tool for fabricating arbitrary shapes, and placing cells in designated locations simultaneously (Pati et al., 2014; Yi et al., 2019; Kim et al., 2020). To confirm its suitability for 3D cell printing, we measured the fidelity of shapes created using cell-laden g-dECM bioink (Figure 2A), demonstrating that it could print a pre-designed grid and rectangular patterns. Further, the gastric ruga pattern, designed to mimic the shape of gastric tissue at the macro scale, was printed accurately. As it has been shown that the shear force during printing can damage the cells and reduce cell viability (Derby, 2012), we verified the cell viability after 3D cell printing. The viability of KATO-III in the printed structure was found to be sufficiently high (>95%) 1 and 7 days after cell printing (Figures 2B,C), demonstrating that the developed bioink can be used not only for fabricating complex structures, but also for culturing various types of gastric cancer-related cells.
Response of Cellular Behavior Based on the Microenvironment
An in vitro aggregated cancer model that mimics more realistic in vivo conditions has been demonstrated (Adenis et al., 2020). In addition, in vitro cancer models that support tissue-specific function and cell aggregation using a decellularized tissue ECM have also been reported (Rijal and Li, 2017; Tian et al., 2018). As the cellular function and drug resistance of cancers are related to cell aggregation (Jianmin et al., 2002; Zhang et al., 2010), we hypothesize that the g-dECM bioink can model the more aggressive characteristics of gastric cancer.
To verify this hypothesis, we examined the presentation of fundamental gastric cancer cell behaviors in the g-dECM bioink, using Matrigel and collagen as representative controls. Here, Matrigel, composed of basement membrane components from tumor cell/tissues (Benton et al., 2014), was selected as it is the most widely used material for modeling the cancer environment. Conversely, Type I collagen, a biomaterial obtained from natural ECM components, was selected as the negative control for cancer behavior, as it shows high biocompatibility and is widely used for developing tissue models (Che et al., 2006; Yip and Cho, 2013). To ensure all experiments were conducted in identical conditions to Matrigel, which has a protein concentration of approximately 10 mg/ml, the concentrations of the g-dECM and collagen were set at 1 w/v%. Histological analysis was performed to study the morphological behavior of the KATO-III gastric cancer cell line, which is derived from gastric signet ring cell carcinoma (Takeuchi et al., 2012). Interestingly, although signet ring cells were observed in all three groups, both H&E staining (Figure 3A) and confocal imaging (Figure 3B) indicate that cells only aggregate in the 1% g-dECM bioink. No cell aggregates were observed in either Matrigel or collagen, suggesting that the g-dECM bioink is more effective in inducing cancer cell aggregation.
The expression of the tissue remodeling marker (MMP2), the cell-cell interaction marker (β-catenin), and the cell-ECM interaction marker (integrin β1), which are involved in gastric cancer cell aggregation and are used to characterize the aggressiveness of cancer cells, were also investigated. We observed that the expression of MMP2, β-catenin, and integrin β1 was significantly higher with the 1% g-dECM bioink than with the Matrigel and collagen (Figure 3C). These results indicate that gastric cancer cells showed more aggressive characteristics in the g-dECM than in the other biomaterials. As cell adhesion molecules can play a crucial role in therapeutic resistance, we conducted drug tests by collating the response of cells in each biomaterial to 5-FU. Here, three different gastric cancer cell lines (SNU-1, KATO-III, and AGS) were encapsulated in each hydrogel, and cultured for 2 weeks. Then, 0-1,000 µM 5-FU was added for 2 days. As expected based on the increased expression of the marker genes, the IC50 values were higher in the 1% g-dECM group, with 6- and 2.7-fold increases noted for KATO-III, 1.8- and 1.3-fold increases noted for SNU-1, and 2.4- and 22.4-fold increases noted for AGS, in comparison to the values in Matrigel and collagen, respectively (Figure 3D). These figures indicate that culturing in the dECM bioink increased the drug resistance of gastric cancer cells. Thus, the proposed g-dECM bioink showed favorable feasibility for further applications in modeling gastric cancer.
Regulating Cancer Behavior Using the Stiffness of the g-dECM Bioink
Cancerous growths are usually observed in stiffer tissue environments than the environments of normal tissues. Hence, it has been surmised that cancer cellular behavior is regulated based on ECM stiffness (Gkretsi and Stylianopoulos, 2018; Kalli and Stylianopoulos, 2018). In a previous in vitro study, to modulate the cancer cells, ECM stiffness was controlled by changing the protein density or the degree of hydrogel crosslinking. These controls subsequently activated cancer cell behavior, such as enhancing the integrin-ECM adhesion of plaque mechanosensors (Gauthier and Roca-Cusachs, 2018). Thus, we hypothesize that the behavior of gastric cancer cells can be upregulated by increasing the concentration of the g-dECM bioink.
To investigate this, we compared the behaviors of KATO-III and SNU-1 cells encapsulated in 1% g-dECM bioink with the behavior of the same gastric cell lines encapsulated in 2% g-dECM bioink. H&E staining showed that with both cell lines, the cell aggregates were larger in the 2% g-dECM bioink than in the 1% g-dECM bioink (Figures 4A,B). From Figure 1Dii, at 1 rad s⁻¹ the storage modulus of the 1% g-dECM was 129.8 ± 54.7 Pa, whereas that of the 2% g-dECM ink was 376.6 ± 156.1 Pa. This result thus indicates that the higher ECM stiffness caused the gastric cancer cells to form larger aggregates.
Further, we observed the expression of the cancer-related markers, MMP2, β-catenin, and integrin β1, which are associated with matrix stiffness and correlated with cancer cell invasion and metastasis (Karamichos et al., 2007; You et al., 2015). As expected, the levels of MMP2, β-catenin, and integrin β1 were upregulated with increased g-dECM bioink stiffness (Figure 4C); in the 2% g-dECM bioink, KATO-III showed significantly higher expressions of all three markers, whereas SNU-1 showed a significantly higher expression of β-catenin, and more modest increases in MMP2 and integrin β1 expression. Thus, the g-dECM matrix stiffness regulates remodeling gene expression, demonstrating that control of the aggressiveness of gastric cancer cells is feasible.
Effects of Cellulose Nanoparticles on Regulating the Mechanical Properties of g-dECM Bioink and Cellular Behavior
In the previous section, it was demonstrated that increasing the density of g-dECM bioink stimulated more aggressive gastric cancer behavior. However, the 2% g-dECM bioink formulation is the maximum concentration achievable. Hence, to enhance its mechanical properties and provide a more biophysically reliable gastric cancer environment, we investigated the use of a cross-linker in addition to the bioink. Cellulose has been identified as a promising biopolymer with remarkable biological properties such as biocompatibility, biodegradability, and low toxicity (Luo et al., 2019). Therefore, in this study we used CN to further increase the mechanical strength of the g-dECM bioink. CN particles were prepared following the methods previously described in the literature (Kumar et al., 2017). The diameters of the prepared particles were observed to be in the 50-100 nm range using TEM (Figure 5A). The final concentration of the prepared CN solution was approximately 20%. The concentration of the CN-g-dECM bioink was set in the range 0-0.5 w/v%, i.e., we compared the behavior of g-dECM without CN, to the behavior of g-dECM mixed with CN up to a concentration of 0.5 w/v%. The stiffness of the 2% g-dECM bioink improved with increases to the concentration of added CN (Figure 5B). To investigate the effect of the CN on cellular behavior, KATO-III was encapsulated in each bioink formulation. Primary thermal crosslinking was subsequently conducted by incubating at 37 °C for 30 min, followed by secondary crosslinking, conducted by treatment with 100 × 10⁻³ M CaCl2. After culturing for 2 weeks, aggregated cells were observed in all groups through histological analysis (Figure 5C). The addition of the CN increased the size of aggregates from 2178.7 ± 210.7 µm² in the 0% CN-g-dECM group, to 3563.8 ± 583.3 µm², and 5666.5 ± 1440.1 µm² in the 0.01% CN-g-dECM, and 0.1% CN-g-dECM groups, respectively. In contrast, the size of cell aggregates in the 0.5% CN-g-dECM group decreased to 2095.1 ± 313.0 µm² (Figure 5D). These results indicate that adjusting the mechanical properties of the bioink using CN supplements can regulate cell aggregation. These observations were further corroborated using the expression of β-catenin and integrin β1, which are sensitive to stiffness (Samuel et al., 2011; Yeh et al., 2017). Up to a concentration of 0.1% CN in g-dECM bioink, where the largest aggregate sizes were observed, the levels of β-catenin and integrin β1 increased with an increase in the matrix stiffness. However, both gene expression and cell aggregate sizes were decreased in the 0.5% CN-g-dECM bioink (Figure 5E), suggesting an improper physical microenvironment for cell proliferation (Cavo et al., 2016). Thus, it can be surmised that an excessively high CN concentration results in an inordinately stiff cell environment that degrades cell properties.
These outcomes indicate that more aggressive cellular functions can be obtained by regulating the stiffness of the bioink using CN. Furthermore, an adequate biophysical environment for gastric cancer cells can be obtained by modulating the concentration of CN.
DISCUSSION
The behavior of gastric cancer cells is regulated by the surrounding environment (Ishimoto et al., 2014). Recognizing the importance of this variable, we developed a 3D cell printed gastric cancer model that uses a gastric specific bioink supplemented with cellulose nanoparticles to provide tissue-specific biochemical and biophysical stimulation of the environment for cancer cells. In our study, we observed that gastric cancer cells in the g-dECM bioink were highly aggregated, in contrast to those in collagen and Matrigel at the same concentration to consider the features of natural ECM. Further, we observed that marker genes related to cancer aggressiveness-MMP2, β-catenin, and integrin β1-were expressed at higher levels in the g-dECM bioink. These results indicated that the g-dECM bioink affects cellular functions, such as matrix remodeling, cell-ECM interaction, and cell-cell interaction, which lead to cancer progression (Kaushik et al., 2019). In addition, because drug resistance is an intrinsic behavior of cancer and plays an important role in developing cancer models (Gottesman, 2002), and organ microenvironment may affect the response to chemotherapy (Khanna and Hunter, 2005), we verified that the therapeutic resistance of gastric cancer cells was increased in the g-dECM bioink. Our findings demonstrate the efficacy of the g-dECM bioink as a drug testing material, in mimicking in vivo conditions that showed high drug resistance. These observations are attributed to the fact that tissue-specific bioinks can provide a tissue-specific environment for cancer cells (Tian et al., 2018) that promote cancer cell progression.
In addition, as already demonstrated in previous studies, native gastric cancer tissues are stiff, and this matrix stiffness regulates the behavior of encapsulated gastric cancer cells (Song et al., 2013; da Cunha et al., 2016). To reconstruct this in vitro, in this study, two methods were chosen to influence the surrounding biophysical environment to regulate the cellular function. In the first method, we increased the ratio of g-dECM to acetic acid in the g-dECM bioink, to enhance its mechanical properties. As well as the enlargement of cell aggregates, we observed that increasing the concentration of g-dECM in the bioink upregulated β-catenin and integrin β1. This implies that the proposed g-dECM bioink can provide a biochemically and biophysically appropriate microenvironment for culturing gastric cancer cells. In the second method, we used CNs, which have superior mechanical strength and excellent biocompatibility (Luo et al., 2019), to enhance the stiffness of the g-dECM bioink. An increase in the modulus of the bioink induced larger cell aggregates and higher expression of β-catenin and integrin β1, which indicates that the ECM stiffness of the prepared structure regulates cell-cell interaction and cell-ECM interaction. Moreover, we observed that the 0.1% CN-g-dECM bioink provided the most suitable stiffness for the gastric cancer cells. This indicates that, with its more aggressive characteristics, 0.1% CN-g-dECM can be used to provide a more reliable, clinically applicable predictor, compared to previous methods relying on 2D and 3D culture in Matrigel and collagen.
In addition, the g-dECM bioink can be used to fabricate arbitrary 3D structures using automated 3D cell-printing techniques that enable the deposition of various cell-laden bioinks at appropriate positions (Pati et al., 2014). Hence, using the developed bioink, 3D cell-printing techniques could enable the fabrication of more complex gastric cancer systems with different types of cells, such as fibroblasts and endothelial cells, that can provide an alternative to animal models. This is important, as it is increasingly clear that, owing to cross-species differences, animal models do not accurately predict the human body's response in drug testing. With their ability to mimic in vivo environments, and with an automated model fabrication process, 3D cell-printed cancer models have become prominent candidates to replace animal models (Kang et al., 2020). Therefore, using our biochemically and biophysically improved bioink, we can fabricate more in vivo-relevant gastric cancer systems in future studies.
CONCLUSION
In this study, we developed a CN-g-dECM bioink for 3D cell printing of a gastric cancer model. With respect to clinical study, the developed bioink has the advantage of providing a biochemically and biophysically appropriate microenvironment for analyzing gastric cancer cells. Compared to commercially available hydrogels such as Matrigel and collagen, gastric cancer cells in this bioink showed more aggressive characteristics, as confirmed by morphological, drug-testing, and gene expression analyses. Moreover, the inclusion of CN in the g-dECM bioink allows the regulation of the size of cell aggregates and of the expression of MMP2, β-catenin, and integrin β1, by controlling the stiffness of the cancer microenvironment. Further, the cell-laden bioink can be patterned at appropriate positions using 3D cell-printing techniques, meaning that it can be applied to fabricating complex gastric cancer systems.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material. | 2021-03-17T13:15:31.506Z | 2021-03-17T00:00:00.000 | {
"year": 2021,
"sha1": "b6587309d1c8e17ab1efb6c3cdebd4e5ee4bd752",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3389/fbioe.2021.605819",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b6587309d1c8e17ab1efb6c3cdebd4e5ee4bd752",
"s2fieldsofstudy": [
"Biology",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259230662 | pes2o/s2orc | v3-fos-license | An Atypical Presentation of Kawasaki Disease and Potential Markers for Diagnosis
Cervical lymphadenopathy is seldom the initial symptom of Kawasaki disease (KD), making diagnosis difficult in early node-first Kawasaki disease (NFKD). Early treatment is important to prevent cardiovascular sequelae. This report discusses a case of a 4-year-old African American female with NFKD and retropharyngeal phlegmon who was initially treated with antibiotics for cervical lymphadenitis. She later developed classic symptoms of KD, including mucositis, conjunctivitis, palmar erythema, and truncal rash. KD was then suspected and treated appropriately, with the patient experiencing rapid clinical improvement. Early misdiagnosis of NFKD is not uncommon, but certain indices, such as patient age, elevated absolute neutrophil count, or elevated liver enzymes, may be helpful in increasing clinical suspicion. NFKD and retropharyngeal phlegmon remain a rare presentation of an already known condition. The case presented here emphasizes the need for KD to be a differential diagnosis in cases of cervical lymphadenitis and retropharyngeal abscess refractory to antibiotic treatment.
Introduction
Kawasaki disease (KD) is a systemic inflammatory illness of unknown etiology that affects children predominantly under 5 years of age. 1 As there are no specific diagnostic tests for KD, the diagnosis is based on the presence of fever and 4 of 5 principal clinical features: bilateral bulbar conjunctival injection, changes of lips and oral cavity, rash, changes of peripheral extremities, and unilateral non-suppurative cervical lymphadenopathy. 1,2 Although diagnostic symptoms may present in any order during the acute phase, the initial presentation of cervical lymphadenopathy along with retropharyngeal phlegmon/abscess and fever is the least common, termed node-first Kawasaki disease (NFKD). [3][4][5] This contributes to the increased risks of delayed diagnosis and treatment. Therefore, there is a need for improved diagnostic tools and awareness of this manifestation of KD.
Hospital Course
A previously healthy, 4-year-old African American female presented with a 4-day history of fever, nasal congestion, rhinorrhea, non-productive cough, and sore throat. Despite daily over-the-counter anti-pyretic administration, her symptoms continued to worsen, including persistent fevers, along with a new onset of left-sided neck swelling associated with drooling and preferential head deviation toward the right side. The patient also experienced fatigue, sore throat, and decreased oral intake, and she denied any recent travel, ear pain, or shortness of breath.
In the emergency department, she was febrile at 38℃ and tachycardic (124 beats/minute). Infectious disease workup was negative for group A Streptococcus pharyngitis, respiratory syncytial virus, influenza A/B, and COVID-19 infections. Her laboratory findings were significant for leukocytosis (white blood cell (WBC) count 19.5 × 10⁹/L), neutrophilia (absolute neutrophil count (ANC) 16.2 × 10⁹/L), elevated C-reactive protein (CRP, 230.7 mg/L), normocytic anemia (hemoglobin 10.1 g/dL with MCV 82.8 fL), hypoalbuminemia (3.6 g/dL), elevated liver enzyme (aspartate aminotransferase (AST) 45 U/L), and sterile pyuria (urine WBC 5-10 cells/hpf), indicating significant systemic inflammation. Neck computed tomography (CT) revealed a retropharyngeal hypodensity extending from the clivus to approximately the C6 vertebral body, along with left palatine tonsillar enlargement and scattered bilateral posterior triangular lymph nodes that were more prominent on the left than on the right (Figures 1 and 2). Given this clinical picture, she was admitted to the hospital for treatment of left cervical lymphadenitis and retropharyngeal phlegmon with concern for an evolving abscess.
After 24 hours of intravenous empiric antibiotic therapy, the patient continued to spike multiple fevers, with temperatures peaking at 39.7℃, in addition to persistent, tender left cervical lymphadenopathy and worsening neck mobility. On day 2 of admission, she had a new onset of a non-pruritic, warm, erythematous, desquamating rash in the anterior diaper area extending into the intertriginous region bilaterally (Figure 3). Later in the day, she developed a raised, circular rash that spread from her anterior neck to the axillary region bilaterally, a sandpaper-like rash on her trunk, and erythema on her palms and soles. Additionally, she had bilateral conjunctivitis with perilimbal sparing as well as dry, cracked lips.
At this time, the patient was diagnosed with Kawasaki disease, prompting a cardiovascular evaluation. Echocardiogram revealed a trace pericardial effusion without systolic or diastolic dysfunction, while the coronary arteries did not demonstrate ectasia or aneurysms. She was immediately initiated on daily moderate-dose aspirin (30 mg/kg/day, divided into 4 doses per day) and a 2 g/kg infusion of intravenous immune globulin (IVIG).
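For readers unfamiliar with weight-based dosing, the arithmetic behind this regimen is straightforward. The Python sketch below uses a hypothetical 16 kg child; the weight is assumed for illustration only and is not taken from the case:

```python
def aspirin_regimen(weight_kg, mg_per_kg_day=30.0, doses_per_day=4):
    """Total daily dose and per-dose amount (mg) for the moderate-dose
    regimen described above: 30 mg/kg/day divided into 4 doses."""
    daily_mg = weight_kg * mg_per_kg_day
    return daily_mg, daily_mg / doses_per_day

# Hypothetical 16 kg child (weight assumed for illustration only):
daily, per_dose = aspirin_regimen(16.0)
print(f"{daily:.0f} mg/day -> {per_dose:.0f} mg per dose, 4 times daily")
```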
Within 12 hours of initiating IVIG therapy, the patient exhibited distinct clinical improvement. She was afebrile and had increased energy, appetite, and range of motion of the neck. Physical exam showed resolving left neck swelling, truncal and diaper rashes, conjunctivitis, and erythema of the palms and soles. Her labs showed downtrending CRP (93.2 mg/L), WBC (18.8 × 10⁹/L), and neutrophil levels (ANC 8.5 × 10⁹/L). She was discharged on low-dose aspirin (81 mg) daily.
At the 2-week follow-up visit, she was back to baseline activity with no evidence of neck swelling or fevers; the only residual finding was bilateral peeling of her fingers, an expected finding in patients recovering from Kawasaki disease. Echocardiogram showed grossly normal coronary arteries without evidence of pericardial effusion. Finally, at her 6-week follow-up visit, aspirin was discontinued given her normal laboratory results (CRP < 5 mg/L; unremarkable complete blood count (CBC)) and echocardiogram findings.
Discussion
Multiple case reports have shown that NFKD is often misdiagnosed as bacterial cervical lymphadenitis (BCL) or deep neck infections, as they present similarly with fever, neck swelling, stiffness, tenderness, and dysphagia. [6][7][8][9][10] In addition, retropharyngeal phlegmon is also a rare presentation of KD. It can be misdiagnosed as a retropharyngeal abscess and has resulted in delayed diagnosis as well as unnecessary needle aspiration and antibiotic therapy. 11 Needle aspiration of the retropharyngeal space in such cases typically yields scant fluid and negative bacterial cultures. 4,9,12 Similar to our case, patients are often found unresponsive to antibiotic therapy. Delays in treatment with IVIG and aspirin have resulted in serious cardiovascular sequelae, such as coronary artery aneurysms and thrombosis. 4,13,14 IVIG given within 10 days of fever onset reduces the risk of coronary artery aneurysms from 25% to less than 5%. 15 Patients with NFKD and with bacterial cervical lymphadenitis share similar CT findings, which make these diagnoses difficult to differentiate when other classic KD symptoms are not yet present. Kanegaye et al 16 showed that both groups of patients had comparable rates of retropharyngeal edema on CT, but findings of retropharyngeal phlegmon and abscess were much more common in patients with bacterial cervical lymphadenitis. Our patient was also initially diagnosed with retropharyngeal phlegmon based on a retropharyngeal hypodense region found on CT; however, previous studies 11,17 have suggested that this abscess-like finding is attributable to inflammation rather than infection.
A retrospective analysis by Yanagi et al 18 proposed 4 indices to differentiate between NFKD and bacterial lymphadenitis: age, neutrophil count, CRP, and AST. Together, these indices had a sensitivity of 78% and a specificity of 100% for identifying KD patients among patients presenting with fever and lymphadenopathy. The average age of NFKD patients was significantly higher than that of patients with cervical lymphadenitis (6.6 vs 4.8 years). Higher average neutrophil counts (14.4 × 10⁹/L vs 8.134 × 10⁹/L) and CRP (106 mg/L vs 52 mg/L) in NFKD cases most likely reflected the systemic inflammation of KD as compared to other, localized causes of cervical lymphadenopathy. Since transaminitis is a relatively common clinical finding in KD, 5 AST was also elevated in NFKD patients compared to those with lymphadenitis (143 U/L vs 31 U/L). Consistent with this, Kanegaye et al 16 also found that age, CRP, and alanine aminotransferase (ALT) levels were significantly higher among NFKD patients than among those with BCL. Applying these indices in the clinical setting has the potential to increase clinical suspicion and enable earlier diagnosis of NFKD.
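As an illustration, the four indices can be combined into a simple screening count. The cutoffs in the Python sketch below are our own illustrative values, placed between the NFKD and lymphadenitis group means quoted above; they are not the criteria published by Yanagi et al:

```python
def nfkd_index_count(age_years, anc_1e9_per_l, crp_mg_l, ast_u_l,
                     cutoffs=(5.0, 10.0, 70.0, 40.0)):
    """Count how many of the four indices (age, ANC, CRP, AST) exceed
    their cutoffs. Cutoffs are illustrative assumptions placed between
    the group means quoted above, NOT the published criteria."""
    age_c, anc_c, crp_c, ast_c = cutoffs
    return sum([age_years >= age_c, anc_1e9_per_l >= anc_c,
                crp_mg_l >= crp_c, ast_u_l >= ast_c])

# Values from the present case: 4 years old, ANC 16.2, CRP 230.7, AST 45
print(nfkd_index_count(4, 16.2, 230.7, 45), "of 4 indices met")  # -> 3
```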
In our case, the laboratory findings were relatively consistent with the indices presented by Yanagi et al. 18 While these indices alone may not be sufficient for the diagnosis of NFKD, increased clinical suspicion for NFKD is warranted in patients with cervical lymphadenopathy, fever, and fulfillment of 3 or 4 of these indices, especially if there is a minimal response to appropriate antimicrobial treatment.
Conclusion
This case depicts an uncommon presentation of KD and provides important lessons for clinicians caring for patients with persistent fevers, cervical lymphadenopathy, and suspected retropharyngeal phlegmon unresolved with antibiotic therapy. Diagnosis of NFKD was made 6 days after initial fever onset, with the development of the other classical symptoms of KD in our patient: rash, bilateral conjunctivitis, erythema of the palms and feet, and mucosal changes. IVIG therapy resulted in resolution of fevers and improvement in clinical symptoms. This further emphasizes the importance of early diagnosis to prevent sequelae. Our case shows that NFKD should be among the differential diagnoses when patients present with findings of cervical lymphadenitis or radiographic findings of retropharyngeal phlegmon/abscess along with significantly elevated CRP, liver transaminases, and fever refractory to antibiotic treatment.
Author Contributions
TNL: Contributed to conception and design; contributed to acquisition; drafted manuscript; critically revised manuscript; gave final approval; agrees to be accountable for all aspects of work ensuring integrity and accuracy. ACK: Contributed to conception and design; contributed to acquisition; drafted manuscript; critically revised manuscript; gave final approval; agrees to be accountable for all aspects of work ensuring integrity and accuracy. JYA: Contributed to conception and design; critically revised manuscript; gave final approval; agrees to be accountable for all aspects of work ensuring integrity and accuracy.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Ethical Approval and Informed Consent
Written informed consent was obtained from the patient's parent for the publication of this case report. IRB approval was not required per institutional guidelines. | 2023-06-24T05:08:43.318Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "35e213e1f335048dab12d6bad99fed9af4540428",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1177/2333794x231180420",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "35e213e1f335048dab12d6bad99fed9af4540428",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
117962776 | pes2o/s2orc | v3-fos-license | Airborne Fungi in Indoor Hospital Environments
1 Federal University of Alagoas, Institute of Biological Sciences and Health, Laboratory of Genetic and Applied Microbiology. Maceió AL, Brazil 2 Federal University of Alagoas, Institute of Chemistry and Biotechnology, Laboratory of Biochemistry of Parasitism and Environmental Microbiology. Maceió AL, Brazil 3 Federal University of Alagoas, Institute of Biological Sciences and Health, Laboratory of Cell and Molecular Biology. Maceió AL, Brazil 4 Federal University of Alagoas, Institute of Biological Sciences and Health, Laboratory of Developmental Biology of Drosophila. Maceió AL, Brazil
Introduction
The activities we develop in our daily routine contribute to our long stay indoors. In these places, the air that circulates is the result of a cooling process, which provides better well-being to those who are present. However, the quality of the air circulating in these places can be compromised; buildings whose indoor air harms the health of workers or residents are thus referred to as "sick buildings", being involved in a condition called the Sick Building Syndrome (SBS).
Sick Building Syndrome (SBS) is a generic term that describes the situation in which the occupants of a given building experience nonspecific symptoms which relate directly to the time in which they develop their activities inside the building (Joshi, 2008;Ghaffarianhoseini et al., 2018). The precondition to classify a building as "sick" relates to the presence of pollutants in the air that circulates, resulting in the occupants being exposed to an environment of poor air quality. The symptoms experienced by the occupants of such sites are usually headaches, disturbances in the eye (irritation, pain, itching, constant tearing or dryness), nasal problems (nasal irritation, nasal cold or runny nose), drowsiness, fatigue, lack of concentration, colds, sore throats, backaches, cold extremities, tension, dry skin, dizziness, muscle aches, weakness, difficulty breathing and wheezing (Joshi, 2008;Schirmer et al., 2011). Indoor air quality in several different types of environments can be influenced by many factors, of which chemical and biological contaminants can significantly harm the health of exposed individuals. Different chemical pollutants with various properties and different concentrations in the environment have been reported as compromising factors to the indoor air quality (WHO, 2010). Some of the major chemical pollutants detected in indoor air are carbon dioxide, inorganic gases, ozone and organic compounds (Luengas et al., 2015). Bessonneau and collaborators evaluated the chemical contamination of indoor air in a school hospital in France, the results showing compounds such as alcohols, esters and ketones as the most prevalent in the air samples analyzed. Additionally, aromatic hydrocarbons, aliphatic hydrocarbons, aldehydes and terpenes were also found in the samples (Bessonneau et al., 2013).
Biological pollutants that are often related to contamination of indoor air include: animal allergens (dust mites and certain proteins), pollen grains, bacteria, endotoxins, fungi and their spores, as well as mycotoxins, which are products of the secondary metabolism of many species of fungi (Luengas et al., 2015;Kim et al., 2017). The presence of biological agents such as bacteria, fungi and viruses in a particular environment and their interference in the quality of the air that circulates in these locations will significantly depend on the factors favoring growth and establishment of these species in the indoor environment. Temperature, moisture and nutrient availability are the major factors commonly related to their establishment (Haleem-Khan and Karuppayil, 2012;Nazaroff, 2013).
With respect to air quality in hospital settings, there are many concerns about the effects that poor-quality air can have on exposed individuals, especially patients, whose hospital stay should favor their recovery. For example, maintaining a well-ventilated space can have a great impact on indoor hospital air quality, since it protects visitors, health professionals and patients from the most diverse types of pollutants that circulate in the environment via aerial dispersion (Li et al., 2007).
When it comes to patients with immune system impairment, the risk of developing infections caused by spores or fungal fragments present in the air increases substantially. Measures that contribute to the reduction of microbiological contamination of the air in hospital environments are of extreme value in the prevention of nosocomial infections in the most susceptible individuals (Gangneux et al., 2006), as inhalation of ambient air is one of the main routes of exposure to fungal pathogens (Peláez et al., 2012). Holý and collaborators (2015) carried out a study in which they evaluated the microbial contamination of indoor air collected in a Transplant Unit of a university hospital, as well as the performance of an air filtration system against the observed contamination. The authors verified that the air filtration equipment was efficient in improving the air quality of the analyzed environment, finding very little evident microbial growth in the samples collected after the filtration device was used. Since fungi can be considered one of the main contaminating microorganisms in indoor air (Quadros et al., 2009), actions to monitor these microbial contaminants in the air, and the role of such measures in the containment of certain diseases caused by these pathogens, are noteworthy (Holý et al., 2015).
Fungi in indoor air environments
Indoor air environments represent one of the many places of occurrence and importance with regard to the presence of fungi. Their adaptive characteristics favor their global dispersion, allowing the survival of these microorganisms in several habitat types (Nevalainen et al., 2015;Coombs et al., 2018). Monitoring of airborne fungi was already recorded in the 19th century (Maddox, 1870), as well as at the beginning of the second half of the 20th century. This initial research on the topic aimed to investigate concentrations of fungi in the outside air (Morrow et al., 1942;Hirst, 1952;Hamilton, 1959). From the early days, researchers were aware of the relationship between the concentrations of fungal spores in outdoor air and their presence in indoor air, as well as the health risks of exposure to these spores in the air of both outdoor and indoor environments (Richards, 1954). The presence of fungi in hospital air was first demonstrated by Noble and Clayton (1963) and later in a study by Lidwell and Noble (1975) that demonstrated the role of air-conditioning devices as a source of fungi for ambient air. The presence of fungi as contaminants or biocontaminants that affect indoor air quality has been widely discussed in the literature (Miller, 1992;Gorny et al., 2002;Dubey, 2011;Caillaud et al., 2018).
It is very clear today that most of the fungi present in indoor air arrive mainly through external air, which is an important source of biological contamination for indoor air (Lee et al., 2006;Crawford et al., 2015;Abassi and Samaei, 2018). Although external air contributes significantly to the composition and concentration of fungal spores in indoor air, it is not the only route by which indoor air contamination can occur. Domestic and everyday activities may have a considerable impact on the concentrations of spores in the air (Lehtonen and Reponen, 1993;Awad et al., 2018). Manipulation of organic material (firewood and potted plants), handling of bedding, dog and cat hair, human skin, hair and nails, as well as clothing and human occupation, can increase the spore load in indoor air (Reponen et al., 1992;Pitkaranta et al., 2008).
The factors considered preponderant in influencing the presence and development of fungi in indoor air environments are humidity, temperature and nutrient availability (Tang et al., 2015). Baughman and Arens (1996) argue that building materials such as wood, cellulose, wallpaper, organic insulation materials, textiles, glues and paints may contain nutrients such as carbohydrates and proteins which readily support fungal growth. Still according to these authors, materials such as concrete, metals, glass fibers, plastics and other synthetic products, although not readily used by fungi, can contain organic remains that serve as a source of nutrients. In general, the temperature and humidity values for indoor fungal growth may vary depending on the species considered. Indoor air environments usually have temperature values favorable for fungal growth, ranging from 10-35 °C (WHO, 2009). For relative humidity, values below 75% are reported as limiting for fungal growth in buildings (Rowan et al., 1999). The critical relative humidity conditions for microbiological growth in building materials were defined by Johansson and collaborators (2005), who established that, depending on the group of materials (wood, concrete and others), fungal growth could occur at critical relative humidity values ranging from a minimum of 75% to a maximum of 95%. Polizzi and collaborators (2011) analyzed the metabolic response of some species of fungi found in indoor air under different environmental conditions and found that temperatures of 25 °C and 30 °C, as well as relative humidity values in the range 97-100%, were ideal conditions for the growth of Penicillium spp., Aspergillus spp. and Periconia spp. Fungal growth on building materials was observed at 5 °C with 91% RH, at 10 °C with 90-95% RH, at 20 °C with 86-90% RH, and at 25 °C with 78-86% RH (Nielsen et al., 2004).
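The temperature and humidity ranges above can be condensed into a rough screening rule. The Python sketch below is an illustrative simplification of the cited values, not a validated growth model:

```python
def growth_risk(temp_c, rh_percent, rh_min=75.0, t_range=(5.0, 35.0)):
    """Crude screen for indoor fungal-growth conditions using the values
    quoted above: RH below ~75% is limiting (Rowan et al., 1999), and
    growth has been reported from roughly 5 °C up to 35 °C (Nielsen et
    al., 2004; WHO, 2009). Species- and material-specific bands (e.g.
    the 75-95% critical RH of Johansson et al., 2005) are collapsed
    into a single threshold, so this is an illustrative simplification."""
    return t_range[0] <= temp_c <= t_range[1] and rh_percent >= rh_min

for t, rh in [(25, 98), (25, 60), (10, 92), (40, 90)]:
    print(f"T = {t} °C, RH = {rh}% -> growth possible: {growth_risk(t, rh)}")
```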
Once fungi are present in the environment and compromising air quality, it would be extremely relevant to determine a limit concentration below which no health risk is posed to exposed individuals. To date, however, no international standard guide has been established concerning the maximum acceptable levels of fungal bioaerosols compatible with good air quality in indoor environments. Rao et al., (1996) reviewed and compared the regulations and quantitative standards for fungi in indoor air environments and concluded that a better characterization of fungal sources for these sites, as well as more data on the effects of acute and chronic exposure to these pathogens, could greatly assist the elaboration of a reliable standard document. According to the American Conference of Governmental Industrial Hygienists (ACGIH), the absence of exposure-response data for bioaerosol concentrations, as well as the non-availability of a standard collection method for fungal bioaerosol analyses, makes it difficult to draw up a common standard (ACGIH, 2009). A review containing some recommendations from governmental and private organizations about concentrations of fungi in the air can be found in the review published by Rao et al., (1996). At present, the acceptable concentration values established for various fungal bioaerosols vary depending on the country in question. Table 1 describes some limiting concentrations for indoor fungi in some countries. For hospital environments, the World Health Organization has defined a maximum acceptable value of 50 CFU/m³ for fungi (WHO, 1988).
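In practice, compliance with such limits is checked by converting colony counts from volumetric air samplers into concentrations, dividing the colonies counted by the sampled air volume. In the Python sketch below, the sampler flow rate, run time and colony count are hypothetical values chosen for illustration, and corrections such as positive-hole adjustment are omitted:

```python
def cfu_per_m3(colonies, flow_l_per_min, minutes):
    """Concentration = colony count / sampled air volume (1 m³ = 1000 L)."""
    sampled_m3 = flow_l_per_min * minutes / 1000.0
    return colonies / sampled_m3

WHO_HOSPITAL_LIMIT = 50.0  # CFU/m³, the WHO (1988) value quoted above

# Hypothetical run: 100 L/min impactor for 2.5 min, 18 colonies counted.
c = cfu_per_m3(colonies=18, flow_l_per_min=100.0, minutes=2.5)
print(f"{c:.0f} CFU/m³ -> exceeds WHO hospital limit: {c > WHO_HOSPITAL_LIMIT}")
```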
Fungal bioaerosols in hospital environments
Nosocomial infections represent a major challenge and concern for hospitals, especially due to the high costs that they generate and the consequences for public health (Scott, 2009;Singh et al., 2018). It is estimated that 10 to 20% of hospital infections are due to airborne pathogens (Eickhoff, 1994;Dürmaz et al., 2005). In hospitals, the main source of microorganisms for ambient air is infected patients themselves (Hambraeus, 1988;Lemaire et al., 2018). Although this is true for many bacterial and viral pathogens, the main source of fungi in the air of hospital environments is outdoor air (Beggs, 2003;Abassi and Samaei, 2018). The opening of doors and windows, and even the flow of people into and out of the hospital, may favor an increased concentration of fungal spores in the indoor air. Human occupation of the hospital (professionals and visitors) and materials brought in by the occupants (personal objects, food and fruit) are also sources of fungi for indoor air (Nuruka et al., 2014). Ventilation systems have also been considered as likely sources of fungal contamination of indoor air (Batterman and Burge, 1995;Ahearn et al., 2004;Sowiak et al., 2018).
Since hospital environments are kept comfortable for human occupancy, and pathogenic fungi are considerably present in these environments, the same conditions can also substantially favor the colonization and establishment of many of these microorganisms. Conditions like heat, moisture, shade, substrate material (e.g. carpets, furniture and concrete), as well as the presence of food, can allow the development and persistence of viable fungi in these locations for extended periods of time (Kowalski, 2012). Neely and Orloff (2001) investigated the growth of some fungi of medical importance on samples of fabrics and plastic materials, both for hospital use, verifying that Aspergillus and Fusarium grew and remained viable in the analyzed samples, indicating the high capacity of hospital environments to provide suitable conditions for the development of these pathogens. Aspergillus fumigatus, a species commonly found in indoor air environments, has also been found to develop for long periods of time (more than 30 days) in hospital cloth samples (Koca et al., 2012).
Ventilation, heating and air-conditioning systems in hospital environments can provide a microenvironment that is highly conducive to fungal growth and favors the spread of many infectious agents, such as fungi, in buildings (Ahearn et al., 2004). High relative humidity or condensation of water on internal surfaces or in the ducts, filters and collecting trays of ventilation systems can promote fungal growth, just as the accumulation of dust and dirt may provide compounds that serve as important nutrients for the metabolism of these microorganisms (Nevalainen et al., 2015). The material composition of air filters, combined with the presence of moisture in them, can also favor the development of many fungi (Kemp et al., 2001;Viegas et al., 2018). Fungal growth in air-conditioners has been reported in the literature (Li et al., 2010, 2012;Viegas et al., 2018), as has the capacity of air filters to retain bacteria and fungi in ventilation systems (Moritz et al., 2001;Aquino et al., 2018). The first research related to fungal contamination of filters used in hospital ventilation systems was carried out by Arnow et al., (1978), who verified the growth of A. fumigatus in the analyzed filters and related the observed growth to some cases of aspergillosis. Later, outbreaks of aspergillosis were associated with contamination of hospital air filters, as well as dust and carpet samples in the same environment (Arnow et al., 1991). Simmons et al., (1997) analyzed several types of air filters in seven hospitals in order to investigate their colonization by fungi. Species of the genera Aspergillus, Acremonium, Alternaria, Cladosporium and Penicillium, fungi frequently present in indoor environments, were found in the samples analyzed.
Several species of filamentous fungi can be found in investigations of aerial contamination in hospital settings. Hong et al., (1999) analyzed air samples from 83 hospital sites and verified that species of the genera Cladosporium, Penicillium, Aspergillus, and Alternaria were the most frequently isolated fungi. Research on fungal concentrations in the indoor air of a Pediatric Hospital Unit revealed that Cladosporium, Alternaria, Penicillium, Aspergillus and Acremonium were the most prevalent genera (Okten and Asan, 2012). Quadros and collaborators (2009), when investigating aerial microbiological contamination of a neonatal ICU, an adult ICU and two operating rooms, found that Aspergillus and Penicillium were the most common fungal genera, although Cladosporium and Acremonium were also present at a lower frequency. Qudiesat and collaborators (2009) evaluated the air quality as well as the amount of airborne microorganisms in two hospitals. They found that in both hospitals Aspergillus spp., Penicillium spp., Rhizopus spp. and Alternaria spp. were the fungal air contaminants. The authors also verified that the concentrations of fungi and bacteria in the air of both environments were influenced by human occupation (Qudiesat et al., 2009).
Major fungi in hospital interior environments and their chemical allergens
Although many species of fungi are found in studies on air quality analysis in hospital settings, only Aspergillus, Penicillium, Cladosporium, Alternaria and Fusarium will be highlighted. In addition, the allergens produced by these fungi, whose presence in the environment can immensely affect the health of the exposed individuals, will also be briefly addressed as they are responsible for worsening the health status of immuneimpaired patientes.
Genus Aspergillus
Species of the genus Aspergillus can be found in several environments, including indoor hospital environments (Kousha et al., 2011;Asif et al., 2018). Only a few species are considered human pathogens (Paulussen et al., 2016), among which we can highlight Aspergillus fumigatus, Aspergillus flavus, Aspergillus nidulans, Aspergillus terreus, and Aspergillus niger (Hedayati et al., 2007;Kwon-Chung and Sugui, 2013;Hachem et al., 2014;Vermeulen et al., 2014;Veraldi et al., 2016). Barrs and collaborators (2013) also reported Aspergillus felis as a causative agent of invasive aspergillosis in humans. A. fumigatus is undoubtedly the species with the greatest impact in terms of infection of individuals with impaired immune systems, as well as in the etiology of diseases caused by airborne fungi (Moreno-González et al., 2016).
Airborne exposure to fungal material can significantly affect the health of individuals in a particular environment, especially in hospitals, where conditions favor nosocomial infections. Clinical manifestations caused by Aspergillus can be categorized into allergic reactions, chronic pulmonary aspergillosis and invasive aspergillosis (Paulussen et al., 2016).
The presence of Aspergillus in hospital environments has been well described in the literature. Holý et al., (2015) reported microbiological contamination of air by Aspergillus sydowii, Aspergillus versicolor and A. terreus in a Transplant Unit of a university hospital. Several sites in a hospital in India were sampled for air in a monitoring study of contamination by filamentous fungi. Of all isolated fungi, the genus Aspergillus was the most frequent, and the species A. niger and A. flavus stood out as the main air contaminants at the analyzed sites (Kushawaha et al., 2015). Interestingly, both species are related to aspergillosis and superficial infections in humans. In their study, Martins-Diniz and collaborators (2005) also showed the presence of Aspergillus in the samples analyzed, which may compromise the recovery of patients, especially those immunologically impaired. The occurrence of A. fumigatus and A. versicolor isolated from poorly maintained air-conditioners located in operating rooms was reported by Gniadek and Macura (2011).
The presence of Aspergillus in hospital environments raises great concern, especially because of its ability to produce allergens that are released into the air, which can affect the health of sensitized individuals. An increase in the concentration of Aspergillus spp. indoors can exacerbate asthma, for example (Zubairi et al., 2014). A. flavus, A. fumigatus, A. niger, A. oryzae and A. versicolor are the allergen-producing species according to the WHO/IUIS Allergen Nomenclature Sub-committee (www.allergen.org), the allergen website approved by the World Health Organization (Table 2). Of these, A. fumigatus is the species with the largest number of described allergens, while for A. flavus and A. versicolor only one allergen has been described for each species. Allergens from A. flavus and A. niger were identified, and the allergic response to them was confirmed by allergy skin tests and serum IgE tests (Verman et al., 2015). In tests using serum from asthmatic patients, a 34-kDa alkaline serine protease was identified as an A. oryzae allergen (Shen et al., 1998). The A. fumigatus allergens Asp f 18 and Asp f 34 have been identified in sera of asthmatic patients and of patients with allergic bronchopulmonary aspergillosis (ABPA), respectively (Shen et al., 2001;Glaser et al., 2009). A review covering various aspects of the allergens produced by A. fumigatus can be found in the study published by Kurup (2005).
Genus Penicillium
The genus Penicillium has species whose occurrence has been verified in a wide variety of habitats, with particularly great prevalence in indoor air environments (Visagie et al., 2014). In hospital settings, its presence as a contaminant of indoor air has been repeatedly verified. Sepahvand et al., (2013) found Penicillium to be the most prominent genus among the fungi isolated from the indoor air microflora of five hospitals. Similarly, a study conducted to evaluate the presence of fungi in the indoor air of a hospital Oncology Unit also pointed to this genus as the most frequent in air samples (Okten et al., 2015). In critical areas of public and private hospitals, analysis of samples collected from the filters and blades of air-conditioning units showed contamination with Penicillium sp. (Santana and Fortune, 2012), which in this type of environment, given the precarious health condition of many patients, may represent a risk to their recovery.
Penicillium mainly affects immunosuppressed individuals, this impairment of the immune system resulting either from a primary infection with human immunodeficiency virus (HIV) or from some form of treatment (Barcus et al., 2005). The first report of human infection by this fungus dates to 1973, when the isolation of Penicillium marneffei from the spleen of a patient with Hodgkin's disease was described (Disalvo et al., 1973). In fact, P. marneffei has been the species most frequently reported in human infections (Vanittanakom et al., 2006;Yu et al., 2018), although rare infections by other species have also been reported (Geltner et al., 2013;Oshikata et al., 2013;Radulesco et al., 2018).
The importance of Penicillium as an allergen-producing fungus, together with its presence in indoor air environments, has recently received great attention. A review published by Sharpe and collaborators (2015) highlights the relationship between exposure to fungal allergens from some species in indoor air environments and asthma cases. For example, asthma severity has been associated with Penicillium exposure (Pongracic et al., 2010). The official allergen website highlights Penicillium brevicompactum, Penicillium chrysogenum, Penicillium citrinum, Penicillium crustosum and Penicillium oxalicum as allergenic species. A 68 kDa allergen produced by P. chrysogenum (previously named Penicillium notatum) was characterized by molecular biology and identified as an N-acetyl-glucosaminidase (Shen et al., 1995). Allergens produced by P. oxalicum and P. chrysogenum were immunologically reactive with serum from asthmatic individuals, and showed homology with an allergen produced by P. citrinum and a vacuolar serine protease from A. fumigatus (Shen et al., 1999). Molecular cloning of genes encoding allergens in P. brevicompactum revealed an allergenic clone, named Pen b 26, which was reactive with serum from sensitive individuals, the allergen being identified as a ribosomal protein (Sevinc et al., 2005). The isolation and characterization of an allergen (Pen cr 26) produced by P. crustosum was reported by Sevinc et al., (2014). Pen cr 26 presented strong sequence homology with Pen b 26; however, the authors described antigenic differences among IgE epitopes, which led them to consider Pen cr 26 a hypoallergenic variant of Pen b 26. Table 3 shows the allergens recognized and approved by the WHO/IUIS Allergen Nomenclature.
Genus Cladosporium
Cladosporium comprises fungi with a wide worldwide distribution which are also isolated from materials commonly found in indoor environments, such as paints, wood, textiles and other organic substrates (Andersen et al., 2000). Conidia of species of this genus are commonly isolated from the air (Pavan and Manjunath, 2014;Weryszko-Chmielewska et al., 2018). Although the highest prevalence of Cladosporium is verified in atmospheric air (Zoppas et al., 2011), its isolation in indoor air environments has also been verified (Nambu et al., 2009). The occurrence of this fungus as one of the most prevalent in the air of hospital environments has been reported in the literature. Analyzing the presence of contaminating fungi in the air-conditioning devices of intensive care units and operating rooms, Aboul-Nasr et al., (2014) found Cladosporium to be one of the most prevalent genera. These results emphasize the contribution of air-conditioning equipment to the contamination of air in hospital units, which significantly compromises patient recovery. Although Cladosporium species are rare as human pathogens, they are involved in cutaneous infections, phaeohyphomycosis and pulmonary infections (Vieira and Pacheco, 2001;Tasic and Tasic, 2007;Castro et al., 2013), some of which can be severe, representing an enormous risk to hospitalized patients. Additional studies have also observed the presence of Cladosporium as one of the main air contaminants in hospital environments (Lobato et al., 2009;Maldonado-Vega et al., 2014;Chaivisit et al., 2018).
The presence of Cladosporium in indoor air environments is considerably influenced by atmospheric air. Spores of this fungus carry allergens that can affect the health of sensitive individuals and have been associated with exacerbation of asthma in children (Raphoz et al., 2010). According to the WHO/IUIS Allergen Nomenclature, Cladosporium cladosporioides and Cladosporium herbarum are the species whose allergens have been identified so far. Of these species, C. herbarum receives greater attention due to the greater number of allergens it produces (Table 4). Many of the allergens produced by Cladosporium show cross-reactivity with allergens of other fungi, especially with species of the genus Alternaria (Achatz et al., 1995). Chou et al., (2008) reported the identification of a serine protease as the main allergen of C. cladosporioides. The authors also observed the reactivity of this allergen with serum from asthmatic patients, as well as its cross-reactivity with allergens of Aspergillus spp. and Penicillium spp.
An NADP-dependent mannitol dehydrogenase was recognized by IgE antibodies in 57% of C. herbarum-sensitive patients and is considered the main allergen of this species (Simon-Nobbe et al., 2006). Monosensitization to Cladosporium allergens is rare, which can be attributed largely to cross-reactivity with allergens of other species.
Genus Alternaria
Alternaria is one of the most prevalent fungi in atmospheric air, along with Aspergillus and Cladosporium. Although the highest concentrations of its spores are observed in the atmosphere, its presence in indoor air environments has also been reported (Sharma et al., 2011;Fang et al., 2013). Alternaria alternata has been described as one of the most prevalent fungal species in indoor air environments in the United States (Woudenberg et al., 2015). In the hospital environment, the presence of Alternaria in air samples collected in a neonatal unit has been reported (Sakartepe et al., 2016). The occurrence of Alternaria spp. was also described in an Intensive Care Unit and apartments of a hospital unit in the city of Francisco Beltrão (Flores and Onofre, 2010). A study by Godini et al., (2015) found a high level of fungal bioaerosols in hospital air samples, with Alternaria spp. among the most prevalent fungi.
Despite being considered essentially a phytopathogen, species of this genus have been related to cases of infection in humans. Most of these infections are opportunistic, especially affecting individuals with impaired immune function (Moreno et al., 2012). The clinical manifestations mainly involve cutaneous and subcutaneous infections, although other types of infection may also occur (Pastor and Guarro, 2008). Patients who have undergone transplantation may be susceptible to Alternaria infections, including those caused by more than one species (Brás et al., 2015). Recently, A. alternata has been related to cases of cutaneous and visceral phaeohyphomycosis (Gomes et al., 2011;Raza et al., 2015). The major importance of Alternaria is undoubtedly related to its position as one of the main genera of allergenic fungi, with singular prominence for the species A. alternata. The association of Alternaria sp. with asthma, hypersensitivity pneumonitis and allergic rhinitis (Pastor and Guarro, 2008), as well as its involvement in cases of respiratory arrest (O'Hollaren et al., 1991), has been verified in the literature. Pulimood et al., (2007) found that exposure of susceptible individuals to Alternaria may increase asthma-related symptoms. Exposure to the spores of these fungi has also been associated with increased risks of hospitalization for asthmatic children and adolescents (Tham et al., 2016).
A. alternata is the only species of the genus that has allergens identified and approved by the WHO/IUIS Allergen Nomenclature Sub-Committee (Table 5). Of all the identified allergens, Alt a 1 is the most important, showing high reactivity with sera from sensitive individuals as well as high allergenicity. Twaroch et al., (2012) have speculated that Alt a 1 is located in the cell wall of Alternaria spores and may in this way contribute to the upper-airway-related symptoms of sensitive individuals. Recently, some published reviews have specifically addressed the allergens produced by A. alternata, as well as the role of each one in the development of respiratory allergies caused by fungi (Kustrzeba-Wójcicka et al., 2014;Gabriel et al., 2016).
Genus Fusarium
Species of the genus Fusarium are widely distributed and recognized as important plant pathogens. Many species are producers of mycotoxins, toxic secondary metabolites that can affect human health (Antonissen et al., 2014). Among the many effects, exposure to these mycotoxins may affect the intestinal epithelium (Liew and Mohd-Redzwan, 2018), as well as the immune system (Maresca, 2013). Fusarium has been linked to a broad spectrum of infections in individuals with impaired immunity. In these individuals, the clinical manifestations of fusariosis include endophthalmitis, sinusitis, pneumonia, skin lesions, as well as fungemia (Nucci and Anaissie, 2007). Immunosuppressed lung transplant patients have often been affected by fusariosis (Carneiro et al., 2011).
The presence of Fusarium as an indoor air contaminant has been mentioned in the literature (Hsu et al., 2012;Ziehe et al., 2014). Likewise, contamination of the air in hospital environments by this fungus can also occur, significantly compromising patient recovery. Pantoja et al., (2012), when analyzing the fungal biodiversity in the air of hospitals in the city of Fortaleza/Brazil, verified the presence of Fusarium spp. in all hospitals and in several of the sampled environments. Other studies on the biological contamination of air in hospitals have also shown the presence of this fungus (Awosika et al., 2012;Emuren and Ordinioha 2016). The WHO/IUIS Allergen Nomenclature Sub-Committee identified and approved four Fusarium allergens (Table 6). The association of asthma with allergens of this fungus in sensitive individuals has been reported in the literature (Khosravi et al., 2012). Hoff et al., (2003) characterized and identified an allergen produced by Fusarium culmorum, which was reactive in 44% of sera from susceptible individuals. A transaldolase was identified as an allergen of Fusarium proliferatum, and this allergen was further verified to have cross-reactivity with the allergen (transaldolase) produced by Cladosporium as well as with human transaldolases (Chou et al., 2014).
In conclusion, artificially air-conditioned environments represent one of the many habitats where the development and establishment of many fungal species can occur. Humidity, temperature and nutrient availability, together with neglected maintenance of refrigeration systems, are the key factors enabling mold growth in indoor air-conditioned environments. In hospital settings, the presence of anemophilous fungi circulating throughout the hospital poses a great risk to all those present, and this should be taken into account. Health professionals, visitors and patients, especially those who are immunologically compromised, can be significantly affected when exposed to fungal contaminants in the air. Many of the anemophilous fungi commonly found in hospital air quality analyses are pathogenic, and they represent an even greater challenge because of their ability to produce allergens. Since many of the allergens produced are related to various respiratory illnesses in children and adults, maintaining a hospital environment free from contamination and with good air quality becomes a major challenge. In this context, a better understanding of all the risks associated with exposure to fungi in indoor environments may lead to measures that help minimize the implications for human health, especially in maintaining safer locations. | 2019-06-13T13:21:02.170Z | 2019-01-20T00:00:00.000 | {
"year": 2019,
"sha1": "5a1fd51bb6dcdc5bc7b67053be00c3c0072cdcba",
"oa_license": null,
"oa_url": "https://www.ijcmas.com/8-1-2019/Jean%20Phellipe%20Marques%20do%20Nascimento,%20et%20al.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e2184693b0046573bf9ebf6b29ccbd45fbfbb5ec",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Materials Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
67822149 | pes2o/s2orc | v3-fos-license | Quantum Mind from a Classical Field Theory of the Brain
We suggest that, with regard to a theory of quantum mind, brain processes can be described by a classical, dissipative, non-abelian gauge theory. In fact, such a theory has a hidden quantum nature due to its non-abelian character, which is revealed through dissipation, when the theory reduces to a quantum vacuum, where temperatures are of the order of absolute zero, and coherence of quantum states is preserved. We consider in particular the case of pure SU(2) gauge theory with a special ansatz for the gauge field, which breaks Lorentz invariance. In the ansatz, a contraction mapping plays the role of dissipation. In the limit of maximal dissipation, which corresponds to the attractive fixed point of the contraction mapping, the gauge fields reduce, up to constant factors, to the Pauli quantum gates for one-qubit states. Then tubulin-qubits can be processed in the quantum vacuum of the classical field theory of the brain, where decoherence is avoided due to the extremely low temperature. Finally, we interpret the classical SU(2) dissipative gauge theory as the quantum metalanguage (relative to the quantum logic of qubits), which holds the non-algorithmic aspect of the mind.
Introduction
Hameroff and Penrose suggested, in their Orch OR model [1] [2] of the Quantum Mind, that tubulins in microtubules can be in superposed states, like qubits, leading to quantum computation in the brain. Influential criticism of the possibility that quantum states can in fact survive long enough in the thermal environment of the brain has been raised by Tegmark [3]. He estimates the decoherence time of tubulin superpositions due to interactions in the brain to be less than 10⁻¹² sec. Compared to typical time scales of microtubule processes, of the order of milliseconds and more, he concludes that the lifetime of tubulin superpositions is much too short to be significant for neurophysiological processes in the microtubule. In a response to this criticism, Hagan et al. [4] have shown that a revised version of Tegmark's model provides decoherence times up to 10 to 100 µsec, and it has been argued that this can be extended up to the neurophysiologically relevant range of 10 to 100 msec under the particular assumptions of the scenario by Penrose and Hameroff. In this paper, we suggest that tubulin-qubits can be processed in the quantum vacuum (where temperatures are of the order of absolute zero, and coherence is maintained) of a classical dissipative non-abelian gauge theory of the brain. In a very recent paper [5] we considered the particular case of a classical SU(2) Yang-Mills theory. Such a theory has a hidden quantum nature, due to its non-abelian character. In fact, it exhibits a quantum vacuum if dissipation is taken into account. In [5], the role of dissipation was played by a contraction mapping in a particular ansatz for the gauge field, which breaks Lorentz invariance. In a limit of the ansatz corresponding to the attractor, the theory falls into a quantum vacuum. There, the gauge field components reduce to quantum logic gates on one-qubit states. The idea of describing brain processes in terms of a field theory goes back to the 1960s, when Ricciardi and Umezawa [6] suggested utilizing the formalism of quantum field theory to describe brain states, with particular emphasis on memory (the "Quantum Brain Dynamics" paradigm). In Quantum Brain Dynamics, the field theory is quantum from the start, while in this paper we consider a classical field theory and look for its hidden quantum features. The proposal of Ricciardi and Umezawa has gone through several refinements, for example by Stuart and Major [7] and by Jibu and Yasue [8]. Some more recent progress has been achieved by Vitiello [9] by including dissipation. However, making dissipation agree with quantization is a hard task, due to the appearance of non-Hermitian operators, and in fact Vitiello's dissipative quantum field theory encountered some technical difficulties. More precisely, the concrete building of a Dissipative Quantum Field Theory requires a generalization of the usual Quantum Field Theory. Namely, the latter is based on assemblies of harmonic oscillators, which, in the case of dissipative processes, should be replaced by damped oscillators. Unfortunately, the latter do not fulfil the energy conservation principle, and this fact makes any attempt to introduce a Hamiltonian-like formalism unreliable. A convenient strategy was introduced in [10] [11], where the influence of a dissipating environment was described by doubling the original damped system through the introduction of a time-reversed version of it, which acts as an absorber of the energy dissipated by the original system.
More recently, Vitiello and collaborators [12] presented an example of dissipation in a classical system which explicitly leads, under suitable conditions, to quantum behaviour. They showed that the dissipation term in the Hamiltonian for a couple of classical damped-amplified oscillators manifests itself as a geometric phase and is actually responsible for the appearance of the zero-point energy in the quantum spectrum of the 1D linear harmonic oscillator. It seems that their line of thought and ours have some points in common and are fundamentally in agreement. Some of the assumptions in [12] were inspired by 't Hooft's work [13] [14], where he discussed classical, deterministic, dissipative models and showed that constraints imposed on the solutions which introduce information loss resemble a quantum structure. 't Hooft's conjecture is that the dissipation of information which would occur at the Planck scale in a regime of completely deterministic dynamics would play a role in the quantum mechanical nature of our world. Penrose's idea of the non-algorithmic nature of mathematical intuition [15] [16] is another important feature of his vision of the Quantum Mind. Here we support this idea, although we use quantum metalanguage [17] instead of the first Gödel incompleteness theorem. However, the two approaches are related to each other, once one takes into account the quantum version [18] of Gödel's theorem, in the logic of quantum information, which derives from a quantum metalanguage. The paper is organized as follows: In Sect. 2, we present the physical model, that is, the classical SU(2) gauge theory, the ansatz, which breaks Lorentz invariance, and the contraction mapping playing the role of dissipation. In Sect. 3, we show that qubits can be processed in the quantum vacuum, due to the fact that there the gauge field components reduce to quantum logic gates for one-qubit states. Due to the very low temperature of the quantum vacuum, tubulin-qubits do not decohere. In Sect. 4, we interpret the classical dissipative non-abelian gauge theory of the brain as the quantum metalanguage, from which the quantum object language of the unconscious originates, and argue that quantum metalanguage represents the non-algorithmic aspect of the mind.
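The doubling mechanism recalled above can be made concrete with a short numerical sketch: a damped oscillator is paired with its time-reversed, amplified image, whose energy gain mirrors the energy the first one dissipates. All parameter values below are arbitrary illustrations, not taken from [10]-[12]:

```python
# Damped oscillator x (loses energy) paired with its time-reversed
# image y (absorbs the dissipated energy), as in the doubling trick:
#     x'' + gamma x' + omega^2 x = 0
#     y'' - gamma y' + omega^2 y = 0
gamma, omega, dt, steps = 0.1, 1.0, 0.001, 20000  # illustrative values

x, vx = 1.0, 0.0
y, vy = 1.0, 0.0
for _ in range(steps):  # semi-implicit Euler integration up to t = 20
    vx += (-gamma * vx - omega**2 * x) * dt
    x += vx * dt
    vy += (+gamma * vy - omega**2 * y) * dt
    y += vy * dt

energy = lambda q, v: 0.5 * (v**2 + omega**2 * q**2)
print(f"E_x at t = 20: {energy(x, vx):.4f} (decays from 0.5)")
print(f"E_y at t = 20: {energy(y, vy):.4f} (grows from 0.5)")
```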
The Physical Model
There is an ansatz [5] for the classical SU(2) gauge field which, in a particular limit corresponding to a vacuum solution, enables one to recover spin-½ quantum mechanics. This ansatz is gauge invariant, but breaks Lorentz invariance. Of course, the nature of the new vacuum state must be intrinsically quantum. At this point one might ask what physical mechanism can trigger this process, which leads to a quantum vacuum state of the original classical theory. The most plausible answer is dissipation. A dissipative system is characterized by the spontaneous appearance of symmetry breaking, which in our case is the breaking of Lorentz symmetry. This vacuum is quantum, as all the thermal fluctuations have disappeared because of dissipation, and quantum fluctuations dominate. In the limit, the gauge field reduces to the generator of a global U(1), i.e., a phase, times a Pauli matrix, that is, a quantum logic gate acting on one-qubit states. This suggests that qubits can be processed in a quantum vacuum of the classical SU(2) gauge theory. Given that the quantum vacuum is at zero absolute temperature, T = 0, the qubits do not decohere, unless they are put in interaction with an external environment. In [5] we did not describe dissipation by any particular model; however, the role of dissipation was played by a contraction mapping in the ansatz. The contraction mapping is related to some geometrical aspects of the gauge theory under consideration.
The Ansatz
In [5], we considered the SU(2) gauge field and made the ansatz (2.1), in which $A_\mu$ is a U(1) gauge field and the $\sigma_a$ are the Pauli matrices, which satisfy the commutation relations $[\sigma_a, \sigma_b] = 2i\,\varepsilon_{abc}\,\sigma_c$ (2.2). The ansatz (2.1) explicitly breaks Lorentz invariance. In the following we will consider, in particular, the limit case in which, in a sense, the SU(2) gauge theory reduces to the quantum mechanics of spin ½. Let us consider the SU(2) gauge transformations (2.5) performed on the original gauge field $A_\mu$, where $g$ is the gauge coupling constant and $U$ is built from three arbitrary real functions $\rho_a(x)$. The ansatz (2.7) transforms accordingly under (2.5).
The contraction mapping as dissipation
The pure SU(2) gauge theory under consideration can be described in terms of a principal fiber bundle $(P, B, G, \pi)$, where $P$ is the total space, $B$ is the base space (in our case $R^4$), $G$ (in our case SU(2)) is the structure group, which is homeomorphic to the fiber space $F$, and $\pi$ is the canonical projection $\pi: P \to R^4$ (2.12). (For a review on principal fiber bundles see, for instance, Ref. [19]). The base space $R^4$ is equipped with the Euclidean metric $d(x, x') = \left(\sum_{\mu=1}^{4} (x_\mu - x'_\mu)^2\right)^{1/2}$ (2.13), where $x$ and $x'$ are two points of $R^4$. One then considers the open ball of rational radius $r_n$ centred at $x^*$, which contains $x$ for large values of $n$ (2.17). The fixed point $x^*$ is an attractive fixed point for the mapping $\lambda(x)$, and is then a particular kind of attractor for the dynamical system described by this theory. Furthermore, $\lambda(x)$ is a contraction mapping in the attraction basin of $x^*$, that is, it satisfies the Lipschitz condition [20] $d(\lambda(x), \lambda(x')) \le k\, d(x, x')$ with $0 \le k < 1$.
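To make the role of the contraction mapping concrete, here is a toy numerical sketch (our own construction, with an invented map and fixed point, not the paper's $\lambda(x)$): a map on $R^4$ with Lipschitz constant $k < 1$ drives every starting point onto its attractive fixed point $x^*$, as the Banach fixed-point theorem guarantees.

```python
import numpy as np

# Toy contraction mapping on R^4: a linear shrink toward a chosen fixed
# point x_star with factor k < 1. Both x_star and k are illustrative
# placeholders; the paper's lambda(x) is fixed by the gauge geometry.
k = 0.5
x_star = np.array([1.0, -2.0, 0.5, 3.0])

def contraction(x):
    # Satisfies the Lipschitz condition d(f(x), f(x')) = k * d(x, x').
    return x_star + k * (x - x_star)

x = np.random.default_rng(0).normal(size=4)  # arbitrary starting point
for _ in range(30):
    x = contraction(x)

# The distance to x_star shrinks like k**n, so after 30 steps it is far
# below machine precision: x_star is an attractive fixed point.
print(np.linalg.norm(x - x_star))
```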
Qubits processed in the quantum vacuum
The qubit is the unit of quantum information. It is the quantum analog of the classical bit $\{0, 1\}$, with the difference that the qubit can also be in a linear superposition of $|0\rangle$ and $|1\rangle$ at the same time. (For a review on quantum information see, for instance, Ref. [21]). The qubit is a unit vector in the 2-dimensional complex Hilbert space $C^2$, and its expression is $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$ (3.1), where $|\,\cdot\,\rangle$ denotes a ket vector in the Dirac (bra-ket) notation in the Hilbert space.
The two kets $|0\rangle$ and $|1\rangle$ form the orthonormal basis of the Hilbert space $C^2$, called the computational basis.
The coefficients $\alpha, \beta$ are complex numbers called probability amplitudes, with the constraint $|\alpha|^2 + |\beta|^2 = 1$ (3.3), which makes the probabilities sum up to one. (Any quantum measurement of the qubit gives either 0, with probability $|\alpha|^2$, or 1, with probability $|\beta|^2$.)
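As a tiny numerical illustration of this constraint (with arbitrary example amplitudes of our own choosing), the squared moduli of $\alpha$ and $\beta$ behave exactly as measurement probabilities:

```python
import numpy as np

# Example amplitudes satisfying |alpha|^2 + |beta|^2 = 1 (constraint (3.3)).
alpha, beta = (1 + 1j) / 2, 1j / np.sqrt(2)
p0, p1 = abs(alpha) ** 2, abs(beta) ** 2
print(p0 + p1)                 # 1.0: probabilities sum up to one

# Simulated measurements return 0 with probability p0 and 1 with p1.
rng = np.random.default_rng(1)
outcomes = rng.choice([0, 1], size=10_000, p=[p0, p1])
print((outcomes == 0).mean())  # close to p0 = 0.5
```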
The geometrical representation of the qubit corresponds to the Bloch sphere, which is the sphere $S^2$ with unit radius. Formally, the qubit, which is a point of a two-dimensional vector space with complex coefficients, would have four degrees of freedom, but the constraint (3.3) and the impossibility of observing the global phase factor reduce the number of degrees of freedom to two. Then, a qubit can be represented as a point on the surface of a sphere with unit radius.
The Bloch sphere is defined by $x^2 + y^2 + z^2 = 1$. Any generic 1-qubit state in (3.1) can be rewritten as $|\psi\rangle = \cos(\vartheta/2)\,|0\rangle + e^{i\varphi}\sin(\vartheta/2)\,|1\rangle$, where the Euler angles $\vartheta$ and $\varphi$ define a point on the unit sphere $S^2$. Thus, any 1-qubit state can be visualized as a point on the Bloch sphere, the two basis states being the poles. We remind the reader that any transformation on a qubit during a computational process is a reversible operation, as it is performed by a unitary operator $U$, satisfying $U U^\dagger = U^\dagger U = I$, where $U^\dagger$ is the Hermitian conjugate of $U$. This can be seen geometrically as follows. Any unitary $2 \times 2$ matrix $U$ on the 2-dimensional complex Hilbert space $C^2$ (an element of the group SU(2) multiplied by a global phase factor), parametrized as $\begin{pmatrix} \alpha & -\beta^* \\ \beta & \alpha^* \end{pmatrix}$ with $|\alpha|^2 + |\beta|^2 = 1$ (where $\alpha^*$ is the complex conjugate of $\alpha$), can be rewritten in terms of a rotation of the Bloch sphere $R_{\hat n}(\theta)$, the rotation matrix of the Bloch sphere by an angle $\theta$ about an axis $\hat n$.
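The Bloch-sphere picture can be checked numerically, as in the short sketch below; the function names and the sample angles are ours, not the paper's.

```python
import numpy as np

# Build |psi> from the angles (theta, phi), read off its Bloch vector from
# the Pauli expectation values, and watch a unitary act as a rigid rotation
# of the sphere. All names and sample values here are illustrative.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def qubit(theta, phi):
    """|psi> = cos(theta/2)|0> + e^{i phi} sin(theta/2)|1>."""
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

def bloch_vector(psi):
    """(x, y, z) = (<X>, <Y>, <Z>); unit length for any pure 1-qubit state."""
    return np.real([psi.conj() @ P @ psi for P in (X, Y, Z)])

psi = qubit(theta=0.7, phi=1.2)
print(np.linalg.norm(bloch_vector(psi)))  # 1.0: a point on the unit sphere

# The X gate is a rotation by pi about the x axis: it flips y and z.
print(bloch_vector(psi), bloch_vector(X @ psi))
```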
In [5] we showed that the SU(2) gauge fields $A_\mu^a$ reduce to operators $A^a$ which, up to a multiplicative constant, are the product of the generator of a global U(1) group times the Pauli matrices (3.9). This means that the pure SU(2) gauge field theory is reduced to a quantum mechanical theory of spin ½ with a constant U(1) "charge", in the absence of any interaction. The operators $A^a$ in (3.9) are unitary operators (3.10). Then, the $A^a$ operators can play the role of quantum logic gates for one-qubit states. In fact, the X, Y, Z quantum logic gates for one qubit are just the three Pauli matrices, and the operators $A^a$ in (3.9) can be rewritten as in (3.12); that is, the $A^a$ operators are, up to a constant factor and a phase factor, just one-qubit quantum logic gates. It should be noticed that the $A^a$ operators are not Hermitian. This feature is a residual of the dissipative character of the original field theory. Then, qubits can be processed in the quantum vacuum state of a classical dissipative non-abelian gauge theory, and decoherence is avoided due to the absolute zero temperature of the quantum vacuum. This means that the tubulin qubits of the Penrose-Hameroff model of the quantum mind can take place in this physical model, and moreover they are protected against decoherence.
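The "unitary but not Hermitian" character of such operators is easy to verify numerically; in the sketch below the constant and phase are illustrative choices of ours, standing in for the values fixed by the gauge theory in (3.12).

```python
import numpy as np

# An operator of the form A_a = c * exp(i*phi) * sigma_a is unitary up to
# the constant c, yet not Hermitian once the phase phi is nontrivial.
# c and phi below are placeholders, not the paper's values.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_1 (X gate)
         np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_2 (Y gate)
         np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma_3 (Z gate)

c, phi = 1.0, 0.8
for s in sigma:
    A = c * np.exp(1j * phi) * s
    is_unitary = np.allclose(A @ A.conj().T, c**2 * np.eye(2))
    is_hermitian = np.allclose(A, A.conj().T)
    print(is_unitary, is_hermitian)  # True False: a gate with a dissipative residue
```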
Quantum Metalanguage: The non-algorithmic aspect of the mind
The arguments discussed in the previous sections suggest that the non-algorithmic aspect of the mind is held by a classical, dissipative, non-abelian field theory of the brain. In fact, such a theory is not computable, either classically or quantum-mechanically. The quantum-computational aspect is held by the quantum mechanical vacuum of that theory (in the Hameroff-Penrose model the quantum-computational mode should describe the unconscious). The classical-computational mode is obtained after decoherence of superposed quantum states, through interaction with the external world. This mode should correspond to consciousness. In logical terms, as was shown in [17], the classical field theory of the brain with hidden quantum nature is a quantum metalanguage (QML), while the quantum mechanics of qubits is the Quantum Object Language (QOL). QML is made of assertions, linked in a metalinguistic way. The difference with a classical metalanguage is that in QML atomic assertions carry assertion degrees, which are complex numbers, interpreted as probability amplitudes. Also, the QML is equipped with Meta Data, corresponding to the constraint that probabilities sum up to one. The reflection principle of basic logic [22] was used to recover the QOL from the QML. By the reflection principle, all the logical connectives are introduced by solving an equation (called the definitional equation), which "reflects" metalinguistic links between assertions into logical connectives between propositions. The QOL derived from the QML through the reflection principle is made of propositions linked by quantum connectives, like, for instance, the connective "quantum superposition" (the quantum analogue of the classical connective "AND"), which is labelled by complex numbers and is noncommutative. In the limit of maximal dissipation, when the gauge fields reduce to unitary operators which process quantum information, we are at the very level of the reflection principle: the QML is processing the QOL, whose elements, propositions, are interpreted as quantum states. It should be noticed that a quantum computer (QC) has a QOL, and its physical theory is quantum mechanics (QM). Therefore, a QC cannot reach a QML (a non-algorithmic mode of thought), because it is impossible to go from the finite number of degrees of freedom of QM to the infinite ones of a field theory (FT). That is, a quantum computer will never be able to reach a non-algorithmic mode of thought. This is the difference between a quantum mind and a quantum computer. In summary, we suggested that the mind has three modes: the non-computational mode (QML), the quantum-computational mode (QOL) describing the Quantum Mind (or unconscious), and the classical-computational one, describing the Classical Mind (or consciousness). The physical description of the first mode is a classical field theory, the second one is quantum information, and the third one is classical information.
Acknowledgements
I am very grateful to E. Pessa and G. Vitiello for useful discussions. | 2018-12-22T11:22:24.813Z | 2011-04-13T00:00:00.000 | {
"year": 2011,
"sha1": "e02834c55d4e5d3e10b1e1ca877635f3e77787a2",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "94c41e72cea81c2c3414677c0a7bbfae7fb32ae1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
18106627 | pes2o/s2orc | v3-fos-license | Effect of Lifestyle on Asthma Control in Japanese Patients: Importance of Periodical Exercise and Raw Vegetable Diet
Background The avoidance of inhaled allergens or tobacco smoke is known to have favorable effects on asthma control. However, it remains unclear whether other lifestyle-related factors are also related to asthma control. Therefore, a comprehensive study examining the associations between various lifestyle factors and asthma control was conducted in Japanese asthmatic patients. Methods The study subjects included 437 stable asthmatic patients recruited from our outpatient clinic over a one-year period. Written informed consent was obtained from each participant. Asthma control was assessed using the asthma control test (ACT), and a structured questionnaire was administered to obtain information regarding lifestyle factors, including tobacco smoking, alcohol drinking, physical exercise, and diet. Both bivariate and multivariate analyses were conducted. Results The proportions of patients with total control (ACT = 25), good control (ACT = 20-24), and poor control (ACT < 20) were 27.5%, 48.1%, and 24.5%, respectively. The proportions of patients in the asthma treatment steps as measured by the Global Initiative for Asthma 2007 in step 1, step 2, step 3, step 4, and step 5 were 5.5%, 17.4%, 7.6%, 60.2%, and 9.4%, respectively. Body mass index, direct tobacco smoking status, and alcohol drinking were not associated with asthma control. On the other hand, younger age (< 65 years old), absence of passive smoking, periodical exercise (> 3 metabolic equivalents-h/week), and raw vegetable intake (> 5 units/week) were significantly associated with good asthma control by bivariate analysis. Younger age, periodical exercise, and raw vegetable intake were significantly associated with good asthma control by multiple linear regression analysis. Conclusions Periodical exercise and raw vegetable intake are associated with good asthma control in Japanese patients.
Introduction
Bronchial asthma attacks are often observed in several situations, including allergen inhalation, smoking, alcohol drinking, exercise, and the use of non-steroidal anti-inflammatory drugs. To date, many investigators have reported relationships between several lifestyle factors and asthma incidence [1][2][3][4][5][6][7]. Increasing body mass index (BMI), passive smoking, and low income are risk factors for asthma incidence [1][2][3][4]. Daily intake of fresh fruit or vegetables in infancy decreases the risk of asthma occurrence [5]. Previous increased intakes of saturated fatty acids, myristic and palmitic acids, and butter appear to be related to the risk of current asthma in children [6]. More frequent consumption of fruit, vegetables, and fish was associated with a lower lifetime prevalence of asthma, whereas higher burger consumption was associated with a higher lifetime asthma prevalence [8].
On the other hand, other studies have reported no clear relationships between dietary patterns and asthma incidence [9,10]. These previous reports focused on the relationship between asthma incidence and past lifestyle factors, including diet, but there have been few reports concerning the relationship between asthma control and daily lifestyle [11]. Moreover, inconsistent findings have been observed among the existing studies of factors associated with asthma control. The avoidance of inhaled allergens or tobacco smoke is known to have favorable effects on asthma control. However, González Barcala and colleagues reported that alcohol drinking did not affect asthma control [12]. Similarly, Westermann and colleagues did not find a relationship between asthma control and periodic exercise [13]. Moreover, it remains unclear whether other lifestyle-related factors are also related to asthma control. Therefore, a comprehensive study was conducted to examine the associations between various lifestyle factors and asthma control in Japanese asthmatic patients.
Ethics Statement
This study was approved by the Institutional Review Board of the National Center for Global Health and Medicine and a written informed consent was obtained from each participant. This study was conducted according to the principles expressed in the Declaration of Helsinki.
Study Design
The study subjects included 437 stable asthmatic patients recruited from the outpatient clinic of the National Center for Global Health and Medicine, Tokyo, Japan in 2009-2010. Eligible patients were aged over 20 years and had a clinical diagnosis of asthma supported by one or more of the following characteristics: variability in peak expiratory flow of more than 20%; airway reversibility after an inhaled β2 agonist; hyperresponsiveness on methacholine challenge; or recurrent dyspnea episodes with wheezing. We excluded patients who could not fill in the questionnaire, who did not visit the clinic regularly, or who had been diagnosed with asthma within 3 months of study entry.
Asthma control over the last four weeks was assessed using the asthma control test (ACT). A structured questionnaire was administered to obtain information regarding lifestyle factors, including tobacco smoking, alcohol drinking, physical exercise, dietary intakes, pets, living space, cleaning habits, occupation, medical expenses, and asthma diary records. Exercise was quantified as the total amount of walking (2 metabolic equivalents (METs)), light exercise (2 METs), moderate exercise (4 METs), heavy exercise (6 METs), and gardening (2 METs). Concerning dietary intakes, we collected information regarding the consumption of cooked vegetables, raw vegetables, citrus fruits, other fruits, mixed vegetable and fruit juice, vegetable juice, and 100% fruit juice. Raw vegetables referred to uncooked, unprocessed vegetables, which are usually organic or wild vegetables; they include uncooked tomatoes, carrots, and leafy greens. The amount of intake was assessed in "units", one unit being defined as the amount of food that can be held in one hand.
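As an illustration of the exercise scoring just described, the sketch below converts weekly activity minutes into METs-hours per week; the function name and the example inputs are ours, while the METs weights and the 3 METs-h/week cutoff used in the analyses come from the text.

```python
# Sketch of the exercise scoring described above: each activity's weekly
# minutes are weighted by its METs value and summed into METs-hours/week.
METS = {"walking": 2, "light": 2, "moderate": 4, "heavy": 6, "gardening": 2}

def mets_hours_per_week(minutes_per_week):
    """minutes_per_week: dict mapping activity -> weekly minutes."""
    return sum(METS[a] * m / 60.0 for a, m in minutes_per_week.items())

# Hypothetical respondent: 60 min walking + 30 min moderate exercise a week.
score = mets_hours_per_week({"walking": 60, "moderate": 30})
print(score, score > 3)   # 4.0 METs-h/week -> above the 3 METs-h/week cutoff
```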
Statistical analyses
We assessed the characteristics of participants and their bivariate associations with asthma control levels using Pearson's χ² test or Fisher's exact test for categorical variables, and Student's t-test, the Mann-Whitney U test, or the Kruskal-Wallis test for continuous variables. Additional analyses were conducted, stratified by sex (male and female) and age group (≤ 64 years and > 64 years). A multiple linear regression model was then constructed to examine the association between asthma control scores and lifestyle-related factors. Two-sided p-values of < 0.05 were regarded as statistically significant. Data analyses were performed with STATA version 11.0 (StataCorp, College Station, TX, USA) or SPSS Statistics version 17.0.0 (IBM Japan, Tokyo, Japan).
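For reference, an equivalent of the bivariate tests named above can be sketched in Python with SciPy (the study itself used STATA and SPSS); the arrays below are invented placeholders, not study data.

```python
from scipy import stats

act_exercisers = [25, 22, 24, 20, 25]   # hypothetical ACT scores, > 3 METs-h/wk
act_sedentary  = [19, 21, 17, 23, 18]   # hypothetical ACT scores, <= 3 METs-h/wk

# Mann-Whitney U test for a difference in ACT distributions between groups.
u_stat, p_value = stats.mannwhitneyu(act_exercisers, act_sedentary,
                                     alternative="two-sided")
print(u_stat, p_value)

# Chi-square test on a 2x2 table, e.g. control status vs. raw-vegetable intake.
table = [[40, 60],    # hypothetical counts: well controlled
         [70, 30]]    # hypothetical counts: poorly controlled
chi2, p, dof, expected = stats.chi2_contingency(table)
print(chi2, p)
```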
Patients' characteristics
The patients' characteristics are shown in Table 1. The mean age of the patients was 64 years, and the average duration of asthma was 18 years. Sixty percent of the patients were atopic, 54.7% were non-smokers, and current smokers accounted for only 6.6%. The comorbidities of the patients included allergic rhinitis (49.5%), allergic dermatitis (13.6%), sinusitis (29.0%), and chronic obstructive pulmonary disease (COPD) (11.0%). Regarding the types of treatment received, 93.2% of patients used an inhaled corticosteroid (ICS), and 66.4% used a long-acting β2 agonist (LABA). The proportions of patients in the asthma treatment steps as measured by the Global Initiative for Asthma (GINA) 2007 in step 1, step 2, step 3, step 4, and step 5 were 5.5%, 17.4%, 7.6%, 60.2%, and 9.4%, respectively. The proportions of patients with total control (ACT = 25), good control (ACT = 20-24), and poor control (ACT < 20) were 27.5%, 48.1%, and 24.5%, respectively (Table 1, Figure 1). Fifty-five percent of patients in step 5 were poorly controlled (Figure 1). Although the proportion of poorly controlled patients increased gradually with the treatment steps, a direct association between treatment steps and asthma control was not observed. Table 2 shows the comparisons of median ACT scores by sex, age group, BMI category, smoking status, alcohol drinking status, and exercise amount. The median ACT score was significantly higher in patients aged 64 years or younger than in older patients (Table 2). More than 60% of patients aged under 64 maintained an ACT score of 25 (total control) (data not shown). The median ACT score was not significantly different among non-smokers, past smokers, and current smokers. However, the median ACT score was significantly lower in passive smokers than in non-passive smokers (p = 0.03). Passive smoking was nevertheless excluded by stepwise selection in the multiple linear regression analysis (Table 3). The median ACT score was also not significantly different between alcohol drinkers and nondrinkers.
Relationships between asthma control and smoking, drinking, and exercise
Regarding exercise, the median ACT score was significantly higher among patients who exercised more than 80 minutes per week than among those who exercised 80 minutes per week or less (p = 0.006) (Table 2). In terms of the amount of exercise, the median ACT score was significantly higher among patients who exercised more than 3 METs-h per week than among those who exercised 3 METs-h per week or less (p = 0.005). Multiple linear regression analysis confirmed the significance of the bivariate analysis (Table 3).
The relation of asthma control to diet
The comparisons of median ACT scores across levels of various vegetable and fruit intakes are shown in Table 4. The median ACT score was significantly higher among patients who consumed more than 5 units of raw vegetables per week than among those consuming 5 units or less per week (p = 0.02). However, additional analyses stratified by gender and age group showed that this association was found only in men (p = 0.001) and in patients aged > 64 years (p = 0.005) (Table 5 and Table 6). Similarly, as shown in Table 7, the median ACT score was significantly higher among patients who consumed > 1 unit of vegetable juice per week than among those consuming 1 unit or less per week (p = 0.02), but only in patients aged 64 years or younger. In multiple linear regression analysis, raw vegetable intake remained significantly associated with higher levels of asthma control (p = 0.005) (Table 8).
Discussion
Several studies have previously reported relationships between lifestyle factors and asthma incidence [1][2][3][4][5][6][7]. However, few reports have focused on the relationships between asthma control and lifestyle factors. A total of 437 asthmatic patients were interviewed in our outpatient clinic, and the relationships between asthma control and several lifestyle factors were investigated. The relationships of smoking and alcohol drinking with asthma have already been reported in several articles. Radon and colleagues reported that passive smoking was a risk factor for asthma occurrence [3], while Bakirtas reported that passive smoking and low income were risk factors for asthma incidence [4]. Similar results were observed in the present study; patients who were exposed to passive smoking, or who could not pay any medical expenses for asthma treatment, tended to have poor asthma control (data partly shown). Regarding lifestyle-related factors, González reported that alcohol drinking did not affect asthma control [12]. Similar results were obtained in the present study. Lucas and colleagues emphasized the importance of physical activity in decreasing asthma prevalence [14]. On the other hand, Westermann found no relationship between asthma control and periodic exercise [13]. However, moderate exercise (> 80 min/week) was found to be associated with good asthma control in the present study. The Japanese government has recommended 4 METs-h/week of exercise for the prevention of lifestyle-related diseases. In the present study, patients with more than 3 METs-h/week of exercise had good asthma control.
Several empirical studies have investigated the effects of dietary intake on asthma. Frode reported that daily intake of fresh fruit or vegetables in infancy decreased the risk of asthma in school-age children [5]. Rodriguez found that increased intakes of saturated fatty acids, myristic and palmitic acids, and butter appeared to be related to the risk of current asthma in children [6]. Other reports mentioned that intake of α-linolenic acid and a low n-6:n-3 PUFA ratio were associated with decreased exhaled NO and improved asthma control [8]. Nagel reported that more frequent consumption of fruit, vegetables, and fish was associated with a lower lifetime prevalence of asthma, whereas high burger consumption was associated with a higher lifetime asthma prevalence [9]. On the other hand, other investigators reported that there were no clear relationships between dietary patterns and asthma incidence [10,11]. These previous reports focused on the relationships between asthma incidence and diet, while the present study examined the relationships between asthma control and diet. In particular, raw vegetable intake, but not cooked vegetable intake, was associated with good asthma control in the present study. The possible explanations for this relationship remain to be investigated. In general, flavonoids and related polyphenolic compounds in vegetables are lost with heating, and there is a report that flavonoids and related polyphenolic compounds have significant anti-inflammatory activity [15]. Recently, Wood reported the importance of intake of antioxidants in vegetables for asthma [16]. Further studies are required to elucidate the relationship between flavonoids or antioxidants and asthma control.
In general, citrus fruits contain more vitamin C than other fruits. Previous reports indicated a relationship between consumption of citrus fruits and the incidence of asthma [17,18]. Furthermore, citrus fruits have anti-inflammatory effects [19]. However, we could not find a relation between the consumption of citrus fruits and asthma control in our study. Although citrus fruits are also included in mixed fruit juice and 100% fruit juice, no relation between asthma control and mixed fruit juice or 100% fruit juice was observed. One possible reason is the genotype of the patient, because citrus fruits may influence sensitivity to asthma treatment [20]. The findings from this study are strengthened by the use of a reliable and standardized questionnaire to measure asthma control levels. Diez reported relationships between asthma control and several risk factors, including sex, race, BMI, smoking, level of education, and habitual activity, in Spanish asthmatic patients [21]. They used the asthma control questionnaire (ACQ) to evaluate asthma control. This questionnaire reflects asthma control over the most recent week. In the present study, we used the ACT questionnaire, which reflects a longer term of asthma control (the most recent month) than the ACQ. For this reason, we believe that the ACT is better than the ACQ for evaluating asthma control when comparing lifestyle factors.
A statistically significant relation between asthma control and exercise or raw vegetable intake was observed in our multiple linear regression analysis. However, the adjusted R² was 0.049, indicating that the association was relatively weak. Interpretation of the results of our study should therefore be made with caution. Since this study was conducted at only one institution, further multicenter studies are required to generalize our results.
In conclusion, periodical exercise and raw vegetable intakes are associated with good asthma control in Japanese patients. | 2017-04-14T03:55:36.490Z | 2013-07-09T00:00:00.000 | {
"year": 2013,
"sha1": "d56a5f053bf19c8e2e49a359a39db0cf00ea8c82",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0068290&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d56a5f053bf19c8e2e49a359a39db0cf00ea8c82",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254095048 | pes2o/s2orc | v3-fos-license | The association between upper gastrointestinal endoscopic findings and internal radiation exposure in residents living in areas affected by the Chernobyl nuclear accident
Many people living around the Chernobyl Nuclear Power Plant (CNPP) have been exposed to 137Cs for several decades after the CNPP accident. Although the half-life of 137Cs is about 30 years, some wild forest foodstuffs are still contaminated by 137Cs even now. We pointed out in a previous report that low-dose internal radiation is occasionally detected in people's bodies. Moreover, some doctors in local hospitals have claimed that internal exposure from contaminated foodstuffs may affect the digestive organs and possibly cause gastrointestinal (GI) diseases. Thus, we attempted to assess whether internal radiation exposure affects the digestive organs, and which factors influence them. Overall, 1,612 residents were assessed for internal 137Cs concentration using a Whole-Body Counter, and their digestive organs were screened with upper GI endoscopy from 2016-2018 in the Zhytomyr region, Ukraine. All participants answered a questionnaire covering their background, intake of wild forest foodstuff and its frequency, smoking habits, and alcohol consumption. We used the number of upper GI endoscopic diagnoses per person to assess the extent of damage to the upper digestive organs. We then statistically analyzed associations between this number and age, sex, level of internal exposure dose, alcohol consumption, wild forest foodstuff intake, and smoking. Consequently, we found that the number of GI diagnoses increased significantly with factors such as sex, intake of wild forest foodstuff, and alcohol consumption, whereas the average level of internal 137Cs exposure and smoking were not related to it. The regression results indicate that alcohol consumption, which most likely accompanies the intake of wild forest foodstuff, is independently related to the number of GI diagnoses. In conclusion, low-dose internal exposure may not affect the digestive organs of residents living around the CNPP.
Introduction
The Chernobyl Nuclear Power Plant (CNPP) accident occurred in Ukraine in 1986 during a safety test in the steam turbine of a nuclear reactor. Two employees died immediately at the time of the accident, and 28 more firefighters died within several weeks after receiving lethal doses of ionizing radiation in a brief period. Shortly after the accident, it was claimed that over 50,000 people would die of Chernobyl-induced cancer and some other diseases related to radiation [1]. However, epidemiological studies on the consequences of the Chernobyl accident and its effects on health have reported a considerably smaller number of casualties. In the early 2000s, several prominent global organizations published multiple reports on the effects of low dose radiation on health from the Chernobyl accident. The United States National Research Council published its comprehensive BEIR-VII report dedicated to the effects of low ionizing radiation levels in 2006 [2], along with a series of independent reports from the World Health Organization (WHO), International Atomic Energy Agency (IAEA), and United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) on the effects of radiation and its consequences for health after the fallout from the Chernobyl accident [3][4][5]. The BEIR-VII 2006 report concluded, "At this time [2006], no conclusion can be drawn concerning the presence or absence of a radiation-related excess of cancer-particularly leukemiaamong Chernobyl accident recovery workers" [2].
Nevertheless, all researchers have agreed that the increase in the rates of thyroid cancer among children under the age of 18 from the affected areas was the consequence of exposure from the fallout of the CNPP [6]. By 2015, more than 20,000 thyroid cancer cases were diagnosed among two million highly contaminated children, who were under 18 at the time of the accident; 15 of them had lethal outcomes [6].
Most epidemiological studies on the effects of radiation on the human body have primarily focused on cancer [7]. The majority of these studies were conducted on survivors of atomic bombings, recipients of radiotherapy procedures, or radiation workers. Radiation effects expressed as non-cancer diseases have been less systematically studied and reported [7]. UNSCEAR carried out major reviews of non-cancer radiation diseases in the 1982 [8] and 1993 [9] UNSCEAR Reports.
The threshold for non-cancer diseases had been considered to lie at dose levels over 4 to 5 Gy, until the Life Span Study (LSS) demonstrated evidence at doses lower than these [7]. In 1992, an analysis of non-cancer mortality data from the LSS cohort of survivors of the atomic bombings in Japan demonstrated a statistically significant association between radiation doses and non-cancer diseases [10]. Furthermore, excess mortality risks from strokes, heart diseases, and respiratory and digestive system diseases were reported.
Analyses of the same LSS non-cancer mortality data from 1965 to 1997 showed little evidence of excess risks below 0.5 Sv. These findings held for the four major disease categories considered: strokes, coronary heart diseases, and digestive and respiratory diseases [7]. Likewise, Ozasa et al., in their LSS study of atomic bombing victims conducted from 1950 to 2003, found that liver cirrhosis and major digestive diseases did not show any increased radiation risk during the whole period or for the period after 1965 (Excess Relative Risk [ERR]/Gy = 0.11, 95% confidence interval [CI]: -0.07, 0.34 and 0.17, 95% CI: -0.04, 0.42) [11].
There is significant uncertainty about the shape of the dose-response relationship at low doses. The current estimates for lifetime risks are presented for exposure at 1 Sv, where they are barely affected by the shape of the dose-response. The magnitude of the risk of non-cancer diseases at lower dose levels (for example, 0.5 Sv) is highly uncertain [7]. Ozasa et al., in their 2012 study of the mortality of atomic bomb survivors, pointed out that increased risks of non-neoplastic diseases, including those of the circulatory, respiratory, and digestive systems, were observed; however, whether these were causal relationships required further investigation [11].
Some studies have shown evidence of effects of radiation on GI diseases. However, all of them investigated only the effects of external radiation, mostly finding evidence in cohorts with high-dose irradiation. Therefore, their findings are not necessarily comparable with ours, because our study focuses on internal exposure at relatively low doses originating from the intake of foodstuff contaminated by 137Cs in affected areas around Chernobyl. Moreover, until recently there have been no studies dedicated to the effects of internal radiation on the digestive organs.
A few studies investigating the presence of radioactivity in the bodies of people living in contaminated areas around the CNPP showed that a substantial percentage of the population (almost 50% at the beginning of the study period) had some level of radiation [12,13]. A study conducted from 1996 to 2008 found that 513 participants, or 0.35% of the study population, had an annual internal radiation dose exceeding 1 mSv, which is the dose limit set by the International Commission on Radiological Protection for the general public [12]. In our own screening of residents around the CNPP, conducted from 2009 to 2018, we found fewer residents (53 participants, 0.02%) with such elevated levels of dose and radiation detected in their bodies. Consequently, there is still uncertainty regarding the effects of chronic low-dose internal radiation and its health outcomes [13].
Some doctors working in clinics in contaminated areas claim and suspect that internal radiation deteriorates the functions of the stomach and may adversely affect the GI system of the human body. Therefore, we attempt to assess whether low-dose internal radiation exposure affects the digestive organs, and to identify the possible factors influencing the digestive organs of residents around the CNPP. The findings of the upper GI endoscopic examinations were interpreted according to the International Classification of Diseases (ICD) of the World Health Organization (WHO). Common upper GI endoscopic diagnoses such as gastritis, duodenitis, duodenogastric reflux, gastroesophageal reflux, stomach ulcer, and diaphragmatic hernia, along with other diagnoses including a few cases of cancer, were found in a wide range of combinations among most of the screened participants. Our study identifies a possible association between the detected number of upper GI endoscopic diagnoses and low-dose internal exposure in participants living in areas contaminated by the Chernobyl accident. This study is one of the first to investigate the effects of internal radiation exposure on the GI organs.
Materials and methods
We conducted this study from July 2016 to February 2018 at the Medical Center of Korosten city. The participants were residents of Korosten city and eight subordinated districts of the Zhytomyr region in Ukraine. As of January 1, 2019, the population of the study area was approximately 323,000. Korosten city is the largest settlement, with over 63,000 people. The study area is located to the west of the CNPP, and the fallout from the accident significantly contaminated it. Our study included settlements located 40-150 kilometers west of the nuclear power plant.
In total, 1,612 people residing within the research area participated in this study. All of them were under the health care surveillance of the Zhytomyr Inter-Area Medical Diagnostic Center (Medical Center), which provides health care services to the residents of our research area. We invited all patients who sought medical assistance at the Medical Center for any upper GI symptoms or digestive organ disorders requiring GI endoscopic intervention during the study period. Residency registration within the research area at the moment of examination was mandatory. Those who agreed to participate in the study initially received a detailed description of the process and content of the study and were then asked to provide written consent. Afterward, they first completed a questionnaire regarding their lifestyle and dietary habits. The questionnaire was distributed in hard copy and prepared in Russian. It consisted of four A4 pages and covered the respondent's name, address, date of birth, informed consent, milk and forest food intake, alcohol consumption, and smoking habits and their frequency. Once they had finished filling in the questionnaire, they were invited to undergo upper GI endoscopy and measurement of their internal body burden on the Whole-Body Counter (WBC). All the data collected from the questionnaires, GI endoscopic examinations, and WBC measurements were then used to assess the effects of internal exposure on the upper GI endoscopic findings and identify associations and contributions.
To assess the internal exposure dose of participants, we used a WBC manufactured by Aloka Co., Ltd (Japan), equipped with a 7.6 cm diameter NaI (Tl) detector. This WBC has a seat adjustable in height and angle, so that the examinee can place their abdomen on the detector. The minimum detectable radioactivity level of 137Cs on this WBC was 270 Bq per body. As for the upper GI examination, all endoscopy screenings were performed with professional GI endoscopy equipment made by the OLYMPUS Company (Japan). Upper GI endoscopic findings were diagnosed by professional gastroenterologists according to the ICD of the WHO. We used the number of upper GI endoscopic diagnoses per person to indicate the extent of GI damage. The number of upper GI endoscopic diagnoses detected in one participant in our dataset varied from 0 to 5. Each participant could have various combinations of upper GI diseases, such as gastritis, duodenitis, duodenogastric reflux, gastroesophageal reflux, stomach ulcer, diaphragmatic hernia, and so on. We considered the number of upper GI endoscopic diagnoses detected in one person and assessed the effect of internal exposure and other factors on the increase in upper GI disease. All measurement procedures and endoscopy screenings were performed by qualified medical personnel at the Medical Center. The measured levels of radioactivity in the body and the number of detected upper GI endoscopic diagnoses for each participant were first written down on hard-copy registry cards by medical specialists. Thereafter, they were transferred into Excel format, along with the relevant information from the questionnaire, for further statistical analysis with professional software.
All the data were cleaned, filtered, and grouped by certain characteristics, such as sex, age, number of detected upper GI endoscopic diagnoses, wild forest food and alcohol intake, and smoking habits. We also converted Bq/body from the WBC into Bq/kg for each individual and subsequently stratified participants into two groups: those with a detectable level of 137Cs and those with a non-detectable level of radioactivity. When the internal exposure of a participant was below the detectable level, it was recorded as "0 Bq." The relevant and necessary statistical tests were conducted and reported in the appropriate way. All statistical analyses were performed with IBM SPSS Statistics 25.0 software. The Mann-Whitney U test and chi-square tests were used for statistical significance and the determination of averages and proportions. We also ran correlation tests and univariate regression analysis to test the contributions of several variables. P-values lower than 0.05 were considered significant.
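A minimal sketch of this dose bookkeeping is shown below; the 270 Bq/body detection limit comes from the text, whereas the function name and the example weights and readings are hypothetical.

```python
# Convert a WBC reading (Bq/body) into Bq/kg, with below-limit readings
# treated as "0 Bq", as described above.
DETECTION_LIMIT_BQ = 270  # minimum detectable 137Cs activity of this WBC, Bq/body

def cs137_bq_per_kg(bq_per_body, weight_kg):
    """Per-person 137Cs concentration; below-limit readings count as 0 Bq."""
    if bq_per_body < DETECTION_LIMIT_BQ:
        return 0.0                    # assigned to the "non-detectable" group
    return bq_per_body / weight_kg

print(cs137_bq_per_kg(150, 70))       # 0.0 -> non-detectable group
print(cs137_bq_per_kg(450, 70))       # ~6.4 Bq/kg -> detectable group
```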
Following the "Ethical Guidelines for Medical and Health Research Involving Human Subjects" published by the Ministry of Education, Culture, Sports, Science, and Technology and the Ministry of Health, Labor and Welfare, this study was approved by the Ethics Committee at Nagasaki University Graduate School of Biomedical Sciences (approval no.: 16062493-4) on March 31, 2021. Informed consent was obtained from each individual through a written form, that indicated agreement for participation in the research. All relevant data excluding the personal information of patients in this research are available upon request.
Inclusivity in global research
Additional information regarding the ethical, cultural, and scientific considerations specific to inclusivity in global research is included in the Supporting Information (S1 Questionnaire).
Results
In total, 1,612 participants took part in the study and underwent WBC screening and GI endoscopy. The general information of the participants is presented in Table 1. Among the participants, 36% were men and 64% were women. The average age of the participants was 49 years, although the average age of the female group (51 years) was significantly higher than that of the men (46 years) (p<0.001). The number of upper GI endoscopic diagnoses detected in one person ranged from 0 to 5. Almost all participants had some type of diagnosis, except for 16 participants, who had "0" diagnoses and were classified as "healthy." Among these healthy participants, two were male and the rest were female. The average number of upper GI endoscopic diagnoses for the entire study population was two, though men had a significantly higher average number (2.2±0.8) than women (1.9±0.8). Table 2 presents the average levels of internal radiation from 137Cs for the entire population, and for men and women separately. The average internal exposure detected in all participants was 6.2±11.8 Bq/kg. Although the average level of internal radiation in men (6.7 Bq/kg) was higher than in women (5.9 Bq/kg), the difference was not significant (p = 0.182). Similarly, men had a higher proportion of individuals with detectable levels (33%) than women (29%), again with no significant difference (p = 0.067). Fig 1A illustrates the distribution of the age ranges of the whole population in numbers and proportions. The 81-83 years age range was the smallest, with only 6 individuals, accounting for about 0.4% of participants, followed by the 18-20 years age range, accounting for 3% of the entire population. The largest age ranges by number of participants were 51-60 years and 61-70 years, comprising 24% and 22%, respectively. We also analyzed the distribution of the numbers of upper GI endoscopic diagnoses detected in one individual (Fig 1B), which shows the number and proportion of participants categorized by the number of upper GI endoscopic diagnoses detected per person. In our dataset, the number of upper GI endoscopic diagnoses detected in an individual ranged from 0 to 5. The most prevalent number of diagnoses was two, accounting for 50%, and the least prevalent was five, accounting for less than 0.1%. Fig 1B shows that more than 70% of the study population had two or more diagnoses. The types of upper GI findings varied widely. However, all of them were commonly prevalent in the general population, except for eight rare cases of cancer (six stomach cancers and two esophageal cancers). The most frequently detected upper GI endoscopic diagnoses throughout our study were gastritis, duodenitis, duodenogastric reflux, gastroesophageal reflux, stomach ulcer, diaphragmatic hernia, and other rare diagnoses. However, we did not investigate the characteristics of the upper GI endoscopic diagnoses as part of this study. Fig 2 shows the average Bq/kg and the proportion of the numbers of upper GI endoscopic diagnoses in each age range group. Young people aged 18-20 years were predominantly detected with 1 or 2 diagnoses, while the proportions of higher numbers of diagnoses, such as 3 and 4, increased with the age of the groups. Five diagnoses were detected only in two age groups, namely 51-60 years and 61-70 years.
We also assessed the significance of differences in the average Bq/kg between age groups using an ANOVA test. The ANOVA multiple-comparisons Tamhane test showed no significant difference between age groups (p = 0.461). The highest average Bq/kg was observed in the 81-83 years group. However, this group may not be representative, as it contains only 6 participants, while all other groups consist of 53-380 participants. If the 81-83 years age group is excluded, then the younger the age, the higher the level of internal radiation exposure; the middle-aged and elderly groups had relatively lower exposure to 137Cs. Fig 3 shows the average Bq/kg and the percentage of participants with 137Cs exposure above the WBC's detection limit for each number of upper GI endoscopic diagnoses. It points to a possible relationship between the number of upper GI endoscopic diagnoses and the percentage of participants with detectable internal exposure: the percentage of people with detectable radiation increased with the number of diagnoses, though it decreased notably in the last group. The correlation coefficient between the average Bq/kg and the number of upper GI endoscopic diagnoses per group was 0.038 (Pearson correlation, p>0.05), suggesting almost no correlation and no significance. Table 3 shows the results of the univariate regression analyses in which the number of upper GI endoscopic diagnoses was the dependent variable, while age, sex, Bq/kg, wild forest food intake, alcohol consumption, and smoking were the independent variables. This analysis evaluates whether these independent variables affect the increase in the number of upper GI endoscopic diagnoses in individuals, and shows the level of contribution of each factor and its significance. According to Table 3, factors such as age, sex, intake of wild food, and alcohol consumption contribute significantly to the increase in the number of upper GI endoscopic diagnoses, whereas Bq/kg and smoking do not. The adjusted R² for this regression analysis was 0.050.
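For illustration, a regression of this form can be sketched with Python's statsmodels standing in for the SPSS analysis actually used; the data frame below is a made-up placeholder and the variable names are ours.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical rows: one participant per row, not study data.
df = pd.DataFrame({
    "n_diagnoses": [2, 1, 3, 2, 4, 1, 2, 3],        # per-person diagnosis count
    "age":         [45, 30, 60, 51, 72, 25, 48, 66],
    "male":        [1, 0, 1, 0, 1, 0, 0, 1],
    "bq_per_kg":   [0.0, 5.1, 0.0, 12.3, 3.2, 0.0, 7.8, 0.0],
    "wild_food":   [1, 0, 1, 1, 0, 0, 1, 1],
    "alcohol":     [1, 0, 1, 1, 1, 0, 0, 1],
    "smoking":     [0, 0, 1, 0, 1, 0, 0, 1],
})

# Linear regression of diagnosis count on the candidate factors.
model = smf.ols(
    "n_diagnoses ~ age + male + bq_per_kg + wild_food + alcohol + smoking",
    data=df,
).fit()
print(model.params)          # per-factor contributions
print(model.rsquared_adj)    # the paper reports an adjusted R^2 of 0.050
```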
We also performed the regression analysis separately for women, as it could reveal an interaction between alcohol consumption and wild food intake, assuming that alcohol consumption is lower in women than in men. Regression analysis for the women-only group revealed results similar to those in Table 3, indicating significance for age, alcohol consumption, and intake of wild food (p<0.05). However, smoking and the level of Bq/kg did not appear to affect the number of upper GI endoscopic diagnoses (p>0.05).
Discussion
In this study, the key focus was on internal exposure from 137Cs and its potential effects on the GI system of the human body. A majority of studies concerning the CNPP accident that investigate the effects of radiation on the human body have concentrated their primary interest on 137Cs and internal exposure. 137Cs has a greater effect on people than other radionuclides due to its properties, such as its long half-life, the amount released into the environment, and its dispersion over a wide area. Internal exposure from 137Cs, as opposed to dose rates from external exposure, decreases more slowly in the general population, and its contribution to total body exposure gradually increases [14]. Therefore, our study attempts to identify the potential effect of low internal doses of 137Cs on the digestive organs of the human body, as it is known that 137Cs accumulates in muscles and visceral organs. Semoshkina et al.'s study reported that 137Cs was highly transferred to the spleen, lungs, heart, muscles, kidneys, skin, and bones in horse tissue taken 90 days after the beginning of radionuclide administration [15]. We assume that a majority of the residents have similar patterns and frequencies of internal exposure from 137Cs in the body, which varies continuously over time. This assumption is based on a previous 10-year study from the same area [13].
This study's participants were aged between 18 and 83 years, and their numbers were relatively well distributed across ages. They predominantly consisted of women, whose average level of internal radiation was lower than that of men, consistent with preceding studies in the same field and area [12,13]. The proportion of women with detectable levels of internal radiation was also lower than that of men, presumably because women tend to be vigilant and to avoid risks [16]. The average internal Bq/kg and the proportion of participants with detectable radioactivity were higher among men than among women. However, neither the difference in average internal Bq/kg between men and women nor the difference in the proportion of detectable participants was significant (p>0.05 for both).
Furthermore, we examined the percentage of positive upper GI findings in men and women, which revealed that men had a significantly higher percentage of positive upper GI diagnoses (p<0.05). Additionally, we found that men had a significantly larger average number of upper GI endoscopic diagnoses than women (p<0.01). This finding is in line with previous studies confirming that men tend to have a larger number of background digestive diseases than women [7]. In this study, the average number of upper GI endoscopic diagnoses detected per person was two for the entire population. Only 16 participants were diagnosed negative, while 434 were detected with 1 diagnosis. The rest of the population (72%) had two or more, and up to 5, diagnoses per person. We inspected the prevalence of the numbers of upper GI endoscopic diagnoses in different age groups and their average levels of internal radiation in Bq/kg, with the intention of identifying relationships between them. The proportions of the numbers of upper GI endoscopic diagnoses in different age range groups revealed a gradual increase in the proportions of higher numbers of diagnoses, particularly 3 and 4 diagnoses, in older groups, though there was no significant difference (p>0.05). On the contrary, the average level of internal radiation in Bq/kg decreased as the age range of the groups increased, excluding the highest age group, which contained only six participants and was unlikely to be credible for comparison. The fact that older adults in the Chernobyl area have lower internal exposure than young people was also confirmed in a previous study that examined body burden over 10 years in a large number of residents of the same Zhytomyr region [13]. Considering the results shown in Fig 2, we believe that Bq/kg has no association with the increase in the number of upper GI endoscopic diagnoses, which was also statistically insignificant. Instead, we attribute these findings to the human aging process, in line with many studies devoted to the GI systems of older adults.
Meanwhile, Cosset et al. reported that patients receiving infra-diaphragmatic high-dose irradiation therapy developed various late radiation GI injuries: stomach and duodenum ulcers, severe gastritis, small bowel obstructions, and perforations; moreover, a few patients developed two injuries at the same time [17]. Similarly, Kavanagh et al. found that radiotherapy doses on the order of 45 Gy to the whole stomach are associated with late effects, primarily ulceration, in the stomach and small bowel in approximately 5% and 7% of patients, respectively [18]. Dumic et al., in their study, stated that "gastrointestinal (GI) changes in the elderly are common" and noted that some GI disorders are "more prevalent in the elderly" [19].
Besides, we examined the relationships between the number of upper GI endoscopic diagnoses, the proportion of people with detectable radioactivity, and the average Bq/kg. We first stratified the entire population into six groups by the number of upper GI endoscopic diagnoses and calculated the percentage of people with detectable radioactivity and the average Bq/kg for each group. We then conducted a correlation analysis between Bq/kg and the number of upper GI endoscopic diagnoses, which yielded a Pearson correlation of 0.038, demonstrating nearly no relation (p>0.05). Additionally, we searched for other similar studies that could confirm or reject our results. However, we could not find similar studies providing either supporting or contradictory references. There are only studies dedicated to the effects of external radiation on non-cancer diseases, reporting that the magnitude of the risk of non-cancer diseases at lower dose levels (for example, 0.5 Sv) is very uncertain [7].
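A small sketch of this group-level correlation follows; only the grouping into 0-5 diagnoses and the reported coefficient of 0.038 come from the text, and the average Bq/kg values are invented for illustration.

```python
from scipy import stats

# Six groups (0-5 diagnoses per person) against each group's average Bq/kg.
n_diagnoses_group = [0, 1, 2, 3, 4, 5]
avg_bq_per_kg     = [5.8, 6.0, 6.3, 6.5, 6.9, 5.2]   # hypothetical group means

r, p = stats.pearsonr(n_diagnoses_group, avg_bq_per_kg)
print(r, p)   # a coefficient near zero with p > 0.05 indicates no relation
```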
It is widely known that the intake of wild forest food containing 137Cs carries particles into the body, where they remain for a certain period and may gradually accumulate [20]. Consequently, if internal radiation is likely to cause GI disorders or diseases, people who frequently consume contaminated food and carry internal radiation should have a higher average number of upper GI endoscopic diagnoses. Therefore, we conducted a statistical calculation to determine whether the group with detectable levels of internal radiation had a higher average number of upper GI endoscopic diagnoses than the group with undetectable levels. There is nearly no age difference between these groups; in fact, the average age of the group with detectable radioactivity was even younger. The group with detectable levels of internal radiation had a higher average number of upper GI endoscopic diagnoses (2.1) than the undetectable group (1.9), and the difference between the two groups was statistically significant (p<0.001). However, the levels of Bq/kg detected in the participants do not contribute to the increase in the number of upper GI endoscopic diagnoses. Given these results, we assumed that the intake of wild forest food might often be accompanied by alcohol intake as a celebration of successful harvests, hunting, or fishing. This, in turn, would increase the frequency of alcohol intake and consequently the number of upper GI endoscopic diagnoses in people with detectable levels of radioactivity. To be more precise, we examined whether wild food consumers have a higher proportion of alcohol consumption. The chi-square test results showed that there was a higher proportion of alcohol drinkers among those who consumed wild forest food, with a significant difference (p<0.05), suggesting that the increased number of upper GI endoscopic diagnoses in groups with detectable radioactivity is more likely to have originated from alcohol consumption.
We conducted a regression analysis involving almost all the main characteristics of our population, which revealed that sex, intake of wild forest food, and alcohol consumption significantly affect the number of upper GI endoscopic diagnoses (p<0.01). Men tend to have a higher number of upper GI endoscopic diagnoses than women, in line with the results of our sex analysis shown above (p<0.05). Intake of wild forest foodstuff significantly affects the number of upper GI endoscopic diagnoses, consistent with the results for the detectable and non-detectable groups shown in Table 2 (p<0.01). We attribute this phenomenon to alcohol consumption, as people consuming wild forest food tend to be alcohol drinkers, as mentioned above. We believe that the intake of wild food and its relationship with GI findings should be investigated more precisely. As expected, the analysis of Bq/kg and the number of upper GI endoscopic diagnoses showed that the former does not contribute to the increase in the latter.
Alcohol consumption clearly affected the number of upper GI endoscopic diagnoses and was statistically significant (p<0.001). Alcohol consumption has already been shown, in studies of its effects on various organs, to cause damage to the digestive organs. Bishehsari et al. point out that alcohol-induced intestinal inflammation may be at the root of multiple organ dysfunctions and chronic disorders associated with alcohol consumption, including chronic liver disease, neurological disease, GI cancers, and inflammatory bowel syndrome [21]. Smoking, on the contrary, showed no statistical evidence of affecting the number of upper GI endoscopic diagnoses. Further, it is worth underscoring that the adjusted R² was 0.050, indicating that all the independent variables together can explain only a tiny portion of the variation in the number of upper GI endoscopic diagnoses.
There are some limitations in our study. First, there is no control cohort group to compare with in order to estimate excess risks of upper GI endoscopic diagnoses induced by internal radiation. However, we found no relationship between the internal radiation dose and the number of upper GI endoscopic diagnoses in this study. Second, we did not follow up the participants over a longer research period or estimate their lifelong accumulated low-dose radiation, nor did we assess the effect of lifelong radiation on the increase in the number of upper GI endoscopic diagnoses. We admit that these points should be considered and included in future studies. Third, our participants were those who sought medical assistance from the Medical Center with symptoms or underlying disorders of their digestive systems. If we assessed the number of upper GI endoscopic diagnoses in the general population, we could obtain more exact information, but we might also face ethical challenges, as GI endoscopy is highly invasive. Next, there may be some confounding factors other than those we considered. We did not sort the classifications of diseases or examine their types, frequency, and causes. Therefore, we assessed the association of radiation with GI findings using the number of upper GI endoscopic diagnoses as an indicator of GI lesions. We assumed that if the GI tract is affected by chronic internal irradiation, this is likely to cause various GI injuries in each person in contaminated areas.
Our study shows that the number of upper GI endoscopic diagnoses was significantly affected by alcohol consumption. We assume that the number of upper GI endoscopic diagnoses may be an appropriate indicator of the extent of GI lesions.
In conclusion, we found no obvious relationship between the increase in the number of upper GI endoscopic diagnoses and internal exposure, specifically Bq/kg. However, there is strong evidence that alcohol consumption is more likely to drive the increase in the number of upper GI endoscopic diagnoses. There is still ambiguity regarding the effect of low-dose radiation on non-cancer diseases, particularly of the GI organs. Discussions and investigations of low-dose radiation effects on human health are still ongoing. This issue requires further rigorous investigation and research methods that can provide more reliable evidence and underpin the related hypotheses. | 2022-12-01T06:17:54.391Z | 2022-11-30T00:00:00.000 | {
"year": 2022,
"sha1": "88a0c2e4daf02a693f9780258f375e64779c759a",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0278403&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ce4830d9be587ec0a621b2b04068c87f3aa74021",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
268553030 | pes2o/s2orc | v3-fos-license | Global spatial dataset of mangrove genus distribution in seaward and riverine margins
Mangroves are nature-based solutions for coastal protection; however, their ability to attenuate waves and to stabilise and accrete sediment varies with their species-specific architecture and frontal area. Hydrodynamic models are typically used to predict and assess the protection afforded by mangroves, but without species or genus distribution information, the results can differ significantly from reality. Data on the frontal genus of mangroves exposed to waves and tides can be used in hydrodynamic models to more accurately forecast the protection benefit provided by mangroves. Globally, frontal species were identified from existing mangrove zonation diagrams to create a global mangrove genus distribution map. This dataset aims to improve the accuracy of hydrodynamic models. The data may be of interest to researchers in coastal engineering, marine science, wetland ecology and blue carbon.
Background & Summary
Globally, cities are looking to adopt nature-based solutions for coastal protection 1, mitigating the increasing erosion and flooding caused by climate change 2. Mangroves have received much attention, but their ability to attenuate waves and reduce flooding is highly variable, depending on their areal extent, water depth and stage of the tide 3, density 4, and genus-specific architecture 5. Species in genera with large aboveground root systems, and therefore large frontal areas, such as Rhizophora spp., have been reported to attenuate nearshore wave heights by up to 70% 6, whereas species in genera with a much smaller frontal area, such as Avicennia spp., typically have a comparatively reduced effect on waves 7. To identify the viability of mangrove forests as nature-based solutions, hydrodynamic models used to predict erosion and flooding need to account for mangrove tree architecture variability based on the characteristics of genera and species. However, there is currently no global map of the distribution of the mangrove genera/species that occupy the seaward margins of mangroves.
Hydrodynamic models are used to predict the effect of vegetation on flooding and erosion of coastlines but typically represent vegetation such as mangroves as a 'drag coefficient' 8 or as '2D rigid cylinders' 9-11. While both of these methods are state of the art, they are not accurate unless they are specific to the genera/species of mangrove exposed to the waves, termed the 'frontal species' 12. The ability of mangroves to withstand coastal hazards is primarily driven by the architecture of the frontal vegetation within a forest. While there are abundant data identifying the architecture of specific mangrove genera and species 13,14 and datasets showing mangrove distribution 15 or biomass and canopy height 16, there is currently no spatial dataset showing the distribution of mangrove genera/species.
Mangrove species typically occur in a discrete order (seaward to landward) based on intertidal environmental factors, including hydroperiod, salinity, soil type, sedimentation, nutrient availability, propagule predation 17-20 and propagule dispersal 21,22. Geomorphologists have typically created mangrove zonation diagrams that illustrate the position within the intertidal zone, or 'order', in which the mangrove species/genera exist, which is known to vary within and among regions 23. Although these diagrams are qualitative, they provide an important, as yet unutilised, resource. This dataset therefore brings together information from zonation patterns described globally to create a single spatial layer of frontal mangrove species. The dataset provides a global spatial layer identifying the frontal mangrove species for each marine ecoregion of the world (MEOW) 24. The outputs include a spatial layer of frontal mangrove genera (and species where available), a comprehensive dataset of all mangrove zonation data, and an interactive spatial model illustrating the location of zonation diagrams using ArcGIS StoryMaps.
The benefits of this spatial layer are two-fold: 1) it is a new global spatial layer that includes the genera/species of mangroves, and 2) it can be used to more accurately prescribe the roughness factor, drag, or architecture in coastal engineering models. This spatial layer has value for marine ecologists, coastal engineers, and conservation scientists alike.
Methods
A systematic literature search was conducted to identify published diagrams outlining the mangrove species zonation observed for a given area. The data extracted from these diagrams included the location, the mangrove species present, the discrete order of the mangrove species within the intertidal zone (from their seaward to landward location), and the marine ecoregion 24. ArcMap 10.8 25 software was used to develop the Bunting et al. 15 mangrove presence spatial data into a mangrove species-specific map, categorised by marine ecoregions 24.
Structured literature search. A systematic literature search was conducted using the University of Queensland's library search tool, which queries databases such as Web of Science and SCOPUS. The search was conducted on or before 1 November 2022.
Search criteria.Topic search criteria included the following terms: ('country name') AND ('mangrove + zonation' OR 'mangrove + profile').All countries that have an ocean border (141 total) were included in the 'country name' search criteria.If any combination of two or more search terms appeared in the title or abstract, the article was shortlisted and later read in total to identify eligibility.
Eligibility for inclusion.
Articles returned by the literature search were included in the meta-analysis if they contained a diagram or image of a mangrove profile illustrating the zonation of different mangrove species. The review yielded 195 eligible studies and 510 zonation diagrams.
Attaching locations to each zonation observation. Specific locations of the zonation observation were recorded for 68 of the diagrams. For most of the zonation diagrams, the exact latitude and longitude were not reported, only the location name. In these cases, Google Earth was used to visually inspect the area outlined in the article, and a location where mangroves appeared to be present was selected. Observations whose locations were selected in this manner are highlighted in the original Excel dataset.
Relevance for inclusion. Google Earth was used to visually inspect the location of each mangrove transect to ensure mangroves were present in the area and the record was relevant. Where a mangrove zonation diagram was relevant to a region larger than several MEOWs, such as 'climatic regions', these data were omitted from the spatial layer but still included in the original Excel dataset. These observations are highlighted by including a description of where they are relevant but with 'NA' in the latitude and longitude.
Developing the spatial layer. Using ArcMap 10.8, the marine ecoregion 24 spatial layer was overlayed onto the mangrove presence spatial layer 15. The marine ecoregions mapped by Spalding et al. 24 were used as a proxy for the varying conditions that may contribute to mangrove zonation. Marine ecoregions where mangroves from Bunting et al. 15 were not present were removed from the dataset.
Using the latitude and longitude of each mangrove zonation observation, a frontal species was assigned to the relevant marine ecoregion (Fig. 1). Mangrove frontal species were selected based on the diagram from the relevant marine ecoregion. Where multiple diagrams existed for the same marine ecoregion and generally showed similar zonation patterns, the most common species was adopted. Where numerous diagrams existed for the same marine ecoregion but there were few similarities among the genus zonation patterns reported, the marine ecoregion was divided into separate polygons to accommodate the site-specific frontal genera; for these areas, the MEOW polygon was split equidistant from each key record of frontal species. Where no data exist for a given ecoregion, no mangrove species was assigned.
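For readers who prefer an open-source workflow, the overlay and genus-assignment steps described above (performed here in ArcMap) could be approximated in Python with geopandas, as sketched below; the file names and the 'ECO_CODE' and 'frontal_genus' column names are hypothetical placeholders, not part of the published dataset.

```python
# Minimal sketch (not the published ArcMap workflow): an approximate
# open-source equivalent using geopandas.
import geopandas as gpd

meow = gpd.read_file("marine_ecoregions.shp")        # Spalding et al. MEOW polygons
mangroves = gpd.read_file("mangrove_presence.shp")   # Bunting et al. mangrove extent
obs = gpd.read_file("zonation_observations.shp")     # points digitised from diagrams

# Keep only ecoregions that actually contain mapped mangroves.
mangroves = mangroves.to_crs(meow.crs)
with_mangroves = gpd.sjoin(meow, mangroves, how="inner",
                           predicate="intersects")["ECO_CODE"].unique()
meow = meow[meow["ECO_CODE"].isin(with_mangroves)]

# Attach each zonation observation to its ecoregion, then take the most
# frequent frontal genus recorded within that ecoregion.
joined = gpd.sjoin(obs.to_crs(meow.crs), meow, how="inner", predicate="within")
frontal = joined.groupby("ECO_CODE")["frontal_genus"].agg(
    lambda genera: genera.mode().iloc[0])
print(frontal.head())
```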
Data Records
The frontal mangrove genus spatial dataset 26 is available at the PANGAEA online data repository in the form of a shapefile: FrontalMangroveSpecies.shp (Fig. 2). Each record (each line in the associated attribute table) corresponds to a marine ecoregion, with three columns ('Genus_1', 'Genus_2', and 'Genus_3') showing the dominant frontal genera. In several instances, multiple mangrove zonation diagrams existed within the same marine ecoregion, so the most common frontal mangrove species was adopted. Where several species occurred within the same region and there was an obvious spatial divide, the marine ecoregion was split. Where multiple species of approximately equal numbers occurred at approximately the same location within the same ecoregion, each dominant genus was listed in a column in no particular order.
The original data derived from the mangrove zonation diagrams are available in an Excel file: MangroveZonationData.xlsx. Each record (each line in the file) corresponds to a mangrove zonation diagram published in the literature. The total number of records in the file is 510, across 195 articles.
An ArcGIS Story Map has been developed allowing the reader to view the original mangrove zonation diagrams with reference to their spatial relevance 27 .
Technical Validation
The coordinates of all published observations of mangrove zonation included in this dataset were validated using Google Earth to confirm the presence of mangroves at the respective locations.
Usage Notes
The shapefile format of the provided dataset enables linking it to other spatial datasets, including shapefiles and rasters. The Excel format of the provided dataset enables data reproducibility and future updating of the shapefile.
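As a usage example, the published shapefile can be loaded and summarised as follows; only the FrontalMangroveSpecies.shp file name and the Genus_1 to Genus_3 columns are taken from the dataset description above, and the rest is an illustrative sketch.

```python
# Loading the published layer (file name as given in the Data Records section).
import geopandas as gpd

genera = gpd.read_file("FrontalMangroveSpecies.shp")
print(genera[["Genus_1", "Genus_2", "Genus_3"]].head())
print(genera["Genus_1"].value_counts())  # e.g., count of ecoregions per genus
```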
Fig. 1 Availability of mangrove species zonation data for each marine ecoregion of the world.
Fig. 2 The distribution of the dominant mangrove genera that form seaward fringing (frontal) stands for each marine ecoregion of the world. The light blue shading (159) represents the Marine Ecoregions of the World where studies of mangrove zonation were not found. Values in parentheses indicate the number of regions for that genus. | 2024-03-22T06:18:28.492Z | 2024-03-20T00:00:00.000 | {
"year": 2024,
"sha1": "e097aec490618befc101231f625ef4b473d4f97b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1038/s41597-024-03134-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f758c919c07cd8f0c21e4927f4bf5d409a8d62a0",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252443895 | pes2o/s2orc | v3-fos-license | Bioimaging Nucleic-Acid Aptamers with Different Specificities in Human Glioblastoma Tissues Highlights Tumoral Heterogeneity
Nucleic-acid aptamers are of strong interest for diagnosis and therapy. Compared with antibodies, they are smaller, more stable to temperature variations, easier to modify, and have higher tissue-penetration abilities. However, they have been little described as detection probes in histology studies of human tissue sections. In this study, we performed fluorescence imaging with two aptamers targeting the cell-surface receptors EGFR and integrin α5β1, both involved in the aggressiveness of glioblastoma. The aptamers' cell-binding specificities were confirmed using confocal imaging. The affinities of the aptamers for glioblastoma cells expressing these receptors were in the 100–300 nM range. The two aptamers were then used to detect EGFR and integrin α5β1 in human glioblastoma tissues and compared with antibody labeling. Our aptafluorescence assays proved able to very easily reveal, in a one-step process, not only inter-tumoral glioblastoma heterogeneity (differences observed at the population level) but also intra-tumoral heterogeneity (differences among cells within individual tumors) when aptamers with different specificities were used simultaneously in multiplexing labeling experiments. The discussion also addresses the strengths and limitations of nucleic-acid aptamers for biomarker detection in histology.
Introduction
Conventional immunohistochemistry (IHC) is a standard diagnostic process in tissue pathology that complements hematoxylin-eosin staining and is commonly used for tumor diagnosis, guiding patient stratification and treatment decisions. This tissue-based technique is, however, limited by the labeling of only one biomarker per tissue section. Yet, single-marker characterization is slowly being replaced by tumoral molecular signatures based on mRNA and protein expression data. Multiplex tissue imaging allows multiple biomarkers to be detected in the same tissue section, revealing the spatial relationships among the cells expressing these biomarkers. Various antibody-based approaches have been developed to detect several antigens together in tissue samples [1][2][3]. The most common methods use sequential colorimetric or fluorescent staining. Briefly, the classical IHC approach relies on the use of a primary antibody to detect the target of interest and an anti-species secondary antibody labeled with an enzyme or a fluorophore for signal detection. For an example of immunofluorescent detection, horse-radish peroxidase can be conjugated to the secondary antibody for signal amplification.

EGFR (epidermal growth factor receptor), one of the receptor tyrosine kinases (RTKs), drives the development of solid tumors [27]. Its overexpression leads to aberrant signaling pathways promoting tumor-cell proliferation, growth, survival, differentiation, and angiogenesis. In GBM, EGFR is amplified and/or mutated in more than 40% of cases [28]. After those targeting VEGF (vascular endothelial growth factor) and VEGFR (VEGF receptor), the most frequently reported drugs in GBM targeted therapies are those targeting EGFR. Forty clinical trials in phases II-IV reported in the last 20 years were based on tyrosine kinase inhibitors and monoclonal antibodies [21,29]. Integrins, a family of αβ heterodimeric transmembrane cell-surface adhesion and signaling receptors, are implicated in cell-cell and cell-matrix communication and are expressed in all nucleated cells of multi-cellular animals [30]. In vertebrates, integrins synergize with other receptors, including RTKs. Frequently overexpressed in solid tumors, integrins promote cell survival, proliferation, invasion, and stemness maintenance and are major actors in disease progression and resistance to therapies [31][32][33][34][35]. In GBM, several integrins are overexpressed in tumoral and endothelial cells [36]. Higher expression levels of the fibronectin receptor, integrin α5β1, are observed in GBM tissue compared with adjacent normal brain tissue [37]. This overexpression was associated with GBM aggressiveness at the RNA [38][39][40] and protein levels [41].
EGFR and integrin α5β1 are two cell-surface receptors that share common features in their signaling pathways, leading to the development of compensatory mechanisms implicated in resistance to therapies targeting RTKs [32]. They are targets of therapeutic interest in the fight against the emergence of resistance. Inhibiting these receptors individually displayed poor results in GBM clinical trials [21].
However, combined targeted therapies would certainly prove to be more effective for this highly heterogeneous tumor [42], which emphasizes the importance of patient selection for personalized treatments. Molecular imaging techniques are needed for detecting GBM biomarkers. Our study focused on the use of fluorophore-conjugated nucleic-acid aptamers targeting EGFR and the α5β1 integrin as detection tools on GBM cells and tissues. Target expression and aptamer binding were first validated in cell lines using flow cytometry and confocal imaging. Aptamers were then further compared to antibodies and used in mono-or multiplexing experiments on formalin-fixed and paraffin-embedded human brain tissues to highlight tumoral heterogeneity. Figure 1 illustrates the experimental design of our study.
Figure 1. Experimental scheme illustrating the aptafluorescence experiments. After mounting GBM cells or tissues on glass, cells or tissues were incubated with aptamers covalently conjugated to fluorophores. Two aptamers with different specificities were used in this study: aptamer E07 to detect EGFR and aptamer H02 to detect integrin α5β1. At the end of this manuscript, we also describe a technique in which both aptamers were simultaneously incubated on GBM tissues (multiplexing experiments). Fluorescence microscopy was then realized for bioimaging. Drawings are not to scale.
Materials
All nucleic-acid aptamers and chemicals were purchased from IBA Lifesciences (Goettingen, Germany), Eurogentec (Seraing, Belgium), and Sigma-Aldrich (Hamburg, Germany). The sequences of all aptamers from this study are described in Supplementary Table S1.
Flow Cytometry
For the determination of equilibrium binding affinities using flow cytometry, aptamer E07 was used at different concentrations (5000, 4000, 2000, 1000, 500, 250, 100, 10, and 1 nM). After detachment with 0.2 M EDTA, 300,000 cells were incubated for 30 min with Cy5-labeled aptamers under gentle agitation to avoid cell sedimentation. Cells used as controls were incubated with Cetuximab at 1 µg/mL for 3 min, washed, and then analyzed (counting 10,000 events) using a FACSCalibur flow cytometer (Becton Dickinson, Le Pont de Claix, France). Flowing software (version 2.5.1, Turku Bioscience, Turku, Finland) was used to analyze the data. To determine the equilibrium dissociation constant, KD, experiments were repeated three times, and GraphPad Prism software (version 5.04, Dotmatics, San Diego, CA, USA) was used.
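For illustration, the one-site specific-binding model that Prism typically fits for such titrations, MFI = Bmax*[A]/(KD + [A]), can be fitted as sketched below; this is not the authors' analysis, and the MFI values are hypothetical placeholders.

```python
# Minimal sketch (not the authors' Prism analysis): fitting the one-site
# specific-binding model MFI = Bmax*[A]/(Kd + [A]) to flow-cytometry data.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([1, 10, 100, 250, 500, 1000, 2000, 4000, 5000])   # nM
mfi = np.array([2, 9, 60, 110, 160, 210, 250, 270, 275])          # hypothetical

def one_site(a, bmax, kd):
    return bmax * a / (kd + a)

(bmax, kd), _ = curve_fit(one_site, conc, mfi, p0=(300, 200))
print(f"Kd ~ {kd:.0f} nM, Bmax ~ {bmax:.0f}")
```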
Fluorescence-Based Assays on Cell Lines
Adherent cells were plated on sterile glass coverslips overnight at 37 °C in culture medium, washed three times, and then saturated for 1 h at room temperature (RT) in selection buffer (phosphate-buffered saline, 1 mM MgCl₂, 0.5 mM CaCl₂; pH 7.4) containing 2% BSA. Labeled aptamers were denatured at 95 °C for 3 min and incubated on ice for 5 min before being resuspended in selection buffer and applied to cells for 30 min at 37 °C. Cells were then washed in selection buffer, fixed for 8 min in 4% paraformaldehyde (PFA), permeabilized for 2 min with 0.2% Triton, and washed again. Then, immunocytochemistry was performed with the following primary antibodies: anti-EGFR (clone D1D4J; Cell Signaling Technology; 1/200) and anti-EEA1 (early endosome antigen 1; clone 14/EEA1; BD Transduction Laboratories; 1/1000). Primary antibodies were added overnight (O/N) at 4 °C, followed by two washes and incubation for 1 h at RT with a secondary antibody conjugated to Alexa 488 or 568 (Life Technologies, Carlsbad, CA, USA) at a 1 µg/mL final concentration. DAPI was added at 1 µg/mL to visualize nuclei. Washing steps were performed before mounting using fluorescent mounting medium (S3023; Dako, Carpinteria, CA, USA).
Human Tissue Samples
Histologic fresh-frozen and formalin-fixed, paraffin-embedded GBM tissues from twenty patients were obtained from the tumor collection of the pathology department of Strasbourg University Hospital (Centre de Ressources Biologiques des Hôpitaux Universitaires de Strasbourg; declaration number DC-2016-2677t) after obtaining written informed consent from the patients. Twenty hematoxylin-eosin-stained paraffin-embedded human tissues, examined by one neuropathologist (B.L.), were confirmed as GBMs according to the 2021 WHO classification of tumors of the central nervous system [24]. Two human epileptic brain tissue samples were used as non-tumoral tissues. Negative controls were performed either with DAPI alone or, for immunolabeling experiments, without adding primary antibodies (i.e., only secondary antibodies were added).
Fluorescence-Based Labeling Assays on Human Tissue Samples
Apta- and immunostaining were realized using tissue sections mounted on glass slides. Paraffin-embedded sections were deparaffinized, rehydrated through a graded alcohol series, and subjected to an antigen unmasking protocol. Briefly, sections were boiled at 100 °C for 10 min in target retrieval solution at pH 9 (S2367; Dako), cooled down to RT for 20-40 min, rinsed briefly in H₂O, and then washed in selection buffer. Fresh-frozen sections were fixed in 4% PFA for 10 min at RT and then washed in selection buffer. For aptafluorescence, slides were rinsed for 5 min in H₂O and then incubated in blocking buffer (selection buffer, 2% BSA), with or without 100 µg/mL tRNA from baker's yeast (R56-36; Sigma-Aldrich, Hamburg, Germany) or yeast tRNA plus salmon sperm DNA (D1626; Sigma-Aldrich), for 1 h in a humid chamber at RT; they were then rinsed in H₂O followed by selection buffer and drained. Aptamers were denatured at 95 °C for 3 min and incubated on ice for 5 min before dilution in selection buffer to a final concentration of 1 or 2 µM for aptamer H02 targeting the α5 integrin and 500 nM for aptamer E07 targeting EGFR. Aptamers were incubated on tumor sections for 1 h on ice, briefly washed in selection buffer, drained, fixed in 4% PFA, and then washed three times in PBS. For immunofluorescence, slides were rinsed briefly in PBS, washed for 5 min in PBS-T (0.1% Tween-20 in PBS), drained, and then incubated in blocking buffer BB-I (5% goat serum in PBS, 0.1% Triton X-100) for 1 h in a humid chamber. O/N incubation with anti-integrin α5 mAb 1928 (6B8516; Millipore, Molsheim, France; 1/200) in BB-I was followed by 3 washes of 3 min in PBS-T and by incubation with a 1/500 dilution of a secondary antibody raised against the host species of the primary antibody and conjugated to Alexa Fluor 488 or 647 (ThermoFisher Scientific, Braunschweig, Germany; A-21245, A-11008, or A-11004) in BB-I. Immuno- and aptastaining were followed by staining with DAPI at a 1 µg/mL final concentration for 30 min at RT to visualize cell nuclei. Stained samples were then washed in PBS. Coverslips were mounted using fluorescent mounting medium (S3023; Dako).
EGFR Immunostaining of Human Tissue Samples
EGFR immunostaining was performed on deparaffinized GBM sections with a BenchMark Ultra system (Ventana, Roche, Basel, Switzerland). After pre-treatment with Protease 1 for 8 min, the monoclonal antibody clone E30 (Dako), reactive against the extracellular domain of the EGFR protein, was used at a 1/500 dilution for 32 min. The ultraView DAB detection system was used for signal development. Negative controls omitting the primary antibody were included.
Imaging
Images of apta- and immunofluorescence were acquired using a NanoZoomer S60 digital slide scanner (Hamamatsu Photonics, Japan) and/or a Leica TCS SPE II confocal microscope at 20× or 63× (oil immersion) magnification. For all slide scanning, images were processed at different magnifications using NDP.view2 version 2.7.43. Mean integrated fluorescence intensity on cells and tissues was measured using ImageJ software as previously described [41,44]. The plot profile tool in ImageJ (version 1.50f, U.S. National Institutes of Health, Bethesda, MD, USA) was used to display a 2D histogram of pixel intensities along a line drawn within an image. Statistical analysis of the data was performed with ANOVA. Data were analyzed with GraphPad Prism version 5.04 and are represented as means ± SEMs. Hematoxylin-eosin-stained tumor sections were read using PathScan Viewer software.
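As an illustration of the quantification described above, the sketch below reproduces the equivalents of ImageJ's mean-intensity measurement and plot profile tool in Python; it is not the authors' pipeline, and 'image.tif' and the line endpoints are hypothetical.

```python
# Minimal sketch (not the authors' ImageJ workflow): mean fluorescence
# intensity and a line intensity profile on a single-channel image.
from skimage import io
from skimage.measure import profile_line

img = io.imread("image.tif").astype(float)     # hypothetical fluorescence image

mfi = img.mean()                               # mean fluorescence intensity
profile = profile_line(img, src=(10, 10), dst=(200, 400))  # (row, col) endpoints
print(f"MFI = {mfi:.1f}; profile length = {profile.size} px")
```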
Validation of Target Expression and Aptamer Binding to Cell Lines
We recently published the identification of aptamer H02 targeting integrin α5β1 [44]. Its affinity for the GBM cell line U87MG expressing integrin α5 was determined using flow cytometry (KD = 277.8 ± 51.8 nM; Table 1). Using confocal imaging, we showed that this aptamer was able to discriminate among ten GBM cell lines expressing high and low levels of integrin α5. Similarly, in the present study, we first characterized the binding parameters of aptamer E07 targeting EGFR [45] in GBM cells. Immunoblots showed that EGFR was expressed in U87 EGFR WT cells but was absent in LN319 (Figure 2A,B). EGFR detection by means of flow cytometry in both cell lines was controlled using the anti-EGFR antibody Cetuximab conjugated to Cy5 (Figure 2C, left). The shift in fluorescence intensity to the left confirms the low expression level of EGFR in LN319 compared with the U87 EGFR WT cell line. This difference in fluorescence intensity was also observed for the binding of the Cy5-conjugated aptamer E07, named E07-Cy5 (Figure 2C, right). The equilibrium affinity parameter, KD, of the interaction between E07-Cy5 and U87 EGFR WT cells was determined using flow cytometry (Figure 2D). Briefly, binding events associated with the fluorescence signal of different concentrations of aptamers, ranging from 1 nM to 5 µM, bound to a constant number of cells were measured. A KD of 208.7 ± 45.6 nM was determined by plotting the mean fluorescence of U87 EGFR WT cells against the concentration of the E07 aptamer (Figure 2D, Table 1). For confocal assays, confluent cells were stained with E07-Cy5 at 100 nM for 30 min. After cell fixation, cells were immunolabeled with an anti-EGFR primary antibody and then with a secondary antibody labeled with Alexa 568. The specificity of the E07-Cy5 aptamer was characterized on the two GBM cell lines, U87 EGFR WT and LN319, expressing high and low levels of EGFR, respectively (Figure 2E). Confocal imaging was also performed on other cell lines: the breast cancer cell lines MCF-7 and MDA-MB-231 (Figure S1). MDA-MB-231 expressed an intermediate level of EGFR, whereas EGFR was not immunodetected in MCF7 (Figure 2A,B). Confocal imaging shows that aptamer E07 detected EGFR on U87 EGFR WT (Figure 2E) and to a lesser extent on MDA-MB-231 cells (Figure S1). Clearly, EGFR aptalabeling corresponded with EGFR immunolabeling and reflected well the EGFR expression level in these cell lines. Fluorescently labeled aptamer E07 was not detected in the cell lines that did not express EGFR (LN319 and MCF7).
Figure 2 (caption, panels D,E). (D) Different concentrations of the E07-Cy5 aptamer (0.001, 0.01, 0.1, 0.25, 0.5, 1, 2, 4, and 5 µM) were incubated with a constant number of U87 EGFR WT GBM cells and analyzed using flow cytometry. Titration resulted in the determination of the equilibrium affinity parameter, KD, for the interaction between U87 EGFR WT cells and aptamer E07 (208.7 ± 45.57 nM). (E) Confocal imaging of the E07-Cy5 aptamer in two cell lines, LN319 and U87 EGFR WT. Cells were seeded on coverslips and incubated with 100 nM of the E07-Cy5 aptamer for 30 min (white). Incubation with an anti-EGFR antibody was followed by incubation with a secondary antibody labeled with Alexa 568 (red). Nuclei were stained with DAPI (blue). Scale bar = 10 µm.
On the basis of their specific cell-binding properties to their respective receptors, we considered the two aptamers, H02 and E07, suitable for integrin α5β1 and EGFR detection in human GBM tissues.
Apta- and Immunodetection of Integrin α5β1 in Paraffin-Embedded and Frozen Glioblastoma Sections
We investigated whether the conditioning of the tumor sections had an influence on aptalabeling using 20 tumor sections from GBM patients. Formalin-fixed paraffin-embedded (FFPE) sections were deparaffinized, rehydrated, and subjected to an antigen unmasking protocol. Fresh-frozen sections were fixed in 4% paraformaldehyde. Aptafluorescence and, for comparison, immunofluorescence experiments were performed to detect integrin α5β1 using the cyanine 5-conjugated H02 aptamer, named H02-Cy5, at 2 µM and anti-integrin α5 mAb 1928 followed by a secondary antibody coupled to Alexa 647. mAb 1928 was recently used to detect integrin α5 via the immunostaining of GBM-PDX and FFPE tissues [41,44]. Nuclei stained with DAPI allowed us to select several fields per tumor section with homogeneous tissue distribution for quantification. The integrin α5β1 protein expression level was quantified in each sample using the mean fluorescence intensity (MFI), as recently described using confocal imaging for aptahistofluorescence (AHF) [44] and for immunohistofluorescence (IHF) [41]. IHF showed similar results for FFPE and frozen tissue sections. Similar results were also obtained via IHF and AHF for FFPE sections (Figure 3A). These results highlight the good reproducibility of IHF regardless of tumor section conditioning. They also emphasize the ability of aptamer H02 to detect integrin α5β1 in human FFPE GBM sections. However, the AHF intensities of frozen sections were too low for the detection of integrin α5β1 with aptamer H02 and for comparison with the data on FFPE sections (Figure 3A). In the subsequent phases of this study, only FFPE sections were further studied.
Detection of Integrin α5β1 Using Apta- and Immunohistofluorescence on FFPE GBM Sections Highlighted Inter-Tumoral Heterogeneity
A recent analysis of integrin α5 expression revealed its upregulation as a negative prognostic biomarker of GBM; the analysis was part of a study of the relationship between patient outcome and α5 protein expression levels in a cohort of 95 FFPE GBM sections using IHF [41]. To define the cut-off threshold allowing one to distinguish two groups characterized by low and high integrin α5 expression levels, the median of the MFI (MMFI) was used. In this present study, the same method was applied to compare AHF and IHF on 20 FFPE GBM sections, different from [41]. The distribution of data is shown in Figure 3B, and representative images of sub-populations with IHF and AHF are shown in Figure 3C. Two groups are clearly distinguished, both via IHF and AHF. Moreover, the values of the ratio of high versus low MMFI were similar for IHF (1.8) and AHF (1.6) and matched the value of 1.5 obtained by Etienne-Selloum et al. [41]. The GBM inter-tumoral heterogeneity illustrated by these results is just as likely to be shown with antibody 1928 via IHF or aptamer H02 via AHF. These results demonstrate that imaging and quantifying inter-patient heterogeneity based on integrin α5β1 detection is similarly achievable in FFPE GBM sections, using either an antibody or an aptamer.
Figure 3 (caption, excerpt). Statistical analyses were performed with Student's t test (**** p < 0.0001; ns, not significant). (C) Representative images of low and high integrin α5 expression staining via IHF and AHF (magnification ×40). The drawings on the left (not to scale) symbolize detection in tumor sections using IHF (an indirect method of detection, with Ab 1928 and a fluorophore-conjugated secondary antibody) and AHF (a direct detection method, with fluorophore-coupled aptamer H02). Integrin α5 labeling is represented in red. Nuclei were stained with DAPI (blue). Scale bar = 50 µm.
Aptahistofluorescence to Highlight Intra-Tumoral Heterogeneity
Because intra-tumoral GBM heterogeneity is a likely major cause of treatment resistance, we then assessed whether it could be detected separately using the H02 and E07 aptamers, both conjugated to Cyanine 5. The data obtained with aptamers were compared to immunological detection in FFPE tumor sections.
Equally scaled images taken with a NanoZoomer S60 slide scanner showed a very similar staining pattern via AHF with the H02-Cy5 aptamer and via IHF with mAb 1928 followed by a secondary antibody conjugated to Alexa 647. Figure 4A shows two sections of the same tumor slice. Two areas could be identified, with a small and a larger number of cells on the left and on the right of the images, respectively, showing invading cells in the lengthwise central part. A blood vessel was visible in the right median area. As with mAb 1928, aptamer H02 allowed us to distinguish tumoral cells at the tumoral core, invading cells at the invasion border, and the edges of a blood vessel. Integrin α5β1 is indeed expressed by tumoral vessels in addition to its expression by GBM tumoral cells [46]. Light microscopy with H&E staining of the same area is shown in Figure S2. The comparable staining patterns using IHF and AHF further supported the specificity of aptamer H02 labeling. Furthermore, the representative image in Figure 4B shows mosaic protein expression, with some cells detected by aptamer H02 and others that were not. These AHF experiments, therefore, enabled the detection of α5+ and α5− cells within the same tumor sections, which, to our knowledge, had never been imaged.
Figure 4 (caption). (A) Comparison of IHF and AHF for the detection of integrin α5. Equally scaled images taken with a NanoZoomer S60 slide scanner of two adjacent sections of the same tumor allowed a direct comparison between the fluorescence patterns of cells stained using IHF with antibody 1928 (Ab1928) and an Alexa647-conjugated secondary antibody and using AHF with Cyanine5-conjugated aptamer H02 (AptH02). Detection of integrin α5 is represented in white. DAPI staining is shown in blue. The dotted line delimits two areas with a small and a large number of cells on the left and right sides of the images, respectively. Another representation showing the number of cells in the two areas is provided in Figure S3. Scale bar = 100 µm. The light microscopy result of an adjacent section is shown in Figure S2. (B) Detection of integrin α5 using AHF. This area shows in more detail two zones delimited by a dotted line: no or very low integrin α5 on the left side and integrin-α5-positive cells on the right side. Magnified images are from the insert, either in single-channel mode or in merged-channel mode. Integrin α5 was detected with Cyanine5-conjugated aptamer H02 (AptH02), represented in white. DAPI staining is represented in blue. The orange and yellow squares show cells unlabeled and labeled with aptamer H02, respectively. Scale bar = 50 µm. (C,D) Comparison of AHF (first three images) and immunohistochemistry (image on the right side) for the detection of EGFR. The same zone of the same tumor, identified in non-adjacent sections via fluorescence and light microscopy images, shows similar profiles for EGFR aptamer and antibody staining.
Detection was realized using AHF with Cyanine5-conjugated aptamer E07 (AptE07; in white), with nuclei stained with DAPI (in blue), and using immunohistochemistry with antibody E30 (AbE30) and a horseradish-peroxidase-conjugated secondary antibody. Scale bar = 200 µm. Images in (D) show two areas with high (noted with H) and low cell density.
We also compared EGFR apta- and immunodetection with the E07-Cy5 aptamer or with antibody clone E30 and a horseradish-peroxidase-conjugated secondary antibody. The anti-EGFR antibody and methodology were those used in clinics for EGFR in vitro diagnostics. As far as we know, aptamer E07 had never been reported to detect EGFR in ex vivo experiments. Both the E07 aptamer and the E30 antibody are known to detect the extracellular domain of the EGFR protein [45,47,48]. Corresponding areas from the same tumor showed similar profiles for EGFR aptamer and antibody staining using fluorescence and light microscopy of the tumoral core (Figure 4C) and invasive border (Figure 4D).
The detection profiles of integrin α5β1 and EGFR were similar using aptamers and antibodies and revealed that the expression of these two proteins was not homogeneous within tumor sections. The two aptamers used in this study were as effective as specific antibodies in demonstrating the heterogeneous staining pattern within the tumor. We, thus, validated the use of aptamers in aptafluorescence for the detection of two molecular biomarkers and to highlight tumoral heterogeneity in FFPE GBM sections.
Multiplexing with Aptamers with Different Specificities
Since we demonstrated that aptamers H02 and E07 were separately able to detect integrin α5β1 and EGFR, we proposed their simultaneous use on the same tissue sections. In these multiplexing experiments, aptamer H02 was conjugated to cyanine 5 and aptamer E07 to Alexa 488 (Figure 5A). To avoid potential hybridization between them, aptamers H02 and E07 were heat-denatured at 95 °C and renatured separately; then, they were pooled shortly before their application to tissue sections.
Representative images of epileptic brain and GBM tissues are shown in Figure 5B,C, respectively, and the analyses of fluorescence intensities are quantified in Figure 5D,E. While the E07 and H02 aptamers did not label non-tumoral tissues (Figure 5B,D), they were efficient in detecting cells expressing EGFR and integrin α5β1 within the tumor. Figure 5C,E are of particular interest. Two different patterns were observed. (i) In most areas, all cells were labeled with the two aptamers. This result highlighted, using bioimaging, the already known co-expression and potential crosstalk between EGFR and integrin α5β1 in GBM [32].
(ii) However, in some areas, such as the one shown with the gray arrow in Figure 5C,E, one could note a lower fluorescence intensity obtained with the E07 aptamer than in the side areas, which highlighted that dual apta-labeling was not identical among cells within the tumor. This indicated a differentiated expression of both receptors, i.e., equal levels of integrin α5β1 but lower levels for EGFR in this zone compared with adjacent areas.
Hence, these results showed not only areas of co-expression of EGFR and integrin α5β1 but also areas where one of these two biomarkers was underexpressed compared with the other, and this was made possible in patient tumor sections using multiplex aptamer detection.
Figure 5. Dual labeling with aptamers targeting integrin α5 and EGFR. (A) Schematic depicting detection via AHF simultaneously using two aptamers, E07 and H02, conjugated to two different fluorophores (not to scale). (B,C) Human epileptic brain and GBM tissues, respectively. DAPI staining is shown in blue. Detection of EGFR with Alexa 488-conjugated aptamer E07 is represented in green. Detection of integrin α5 with Cyanine5-conjugated aptamer H02 is represented in gray. Images in (B,C) were captured using the same settings to allow a direct comparison of staining intensity with a NanoZoomer S60 slide scanner. Scale bar = 100 µm. (D,E) Histograms of normalized fluorescence intensities corresponding to detection with aptamers E07 (in green) and H02 (in gray). The histograms in (D,E) correspond to the fluorescence intensities of (B,C), respectively, quantified along the orange diagonal arrow. The histograms show only sparse fluorescence in epileptic tissue (D); in GBM tissue (E), they show that areas were not uniformly labeled with both aptamers. For example, the gray arrow in (E) shows an area strongly and faintly labeled with aptamers H02 and E07, respectively. This area corresponds to the cells pointed at by the gray arrow in (C).
Discussion
Tumoral heterogeneity, which encompasses both inter-tumoral heterogeneity (differences observed at the population level) and intra-tumoral heterogeneity (differences among cells within individual tumors), affects treatment response. It is key to understanding treatment failure, notably in GBM, where multiple distinct populations of tumoral cells confer survival advantages as well as resistance to therapies and for which drug treatment remains largely ineffective. Technical advances have helped to reveal GBM heterogeneity at the DNA and RNA levels. However, as gene expression data often correlate poorly with variations in protein expression, reliable and easily implementable methods are needed to identify molecular targets at the protein level [49]. A large amount of information is missing in histology due to methodological and tool limitations. Though essential for a better understanding of pathological processes and for the development of personalized therapeutic strategies, the simultaneous detection of multiple biomarkers is not systematically studied [50]. The detection of multiple proteins in IHC, the standard method for the in situ analysis of FFPE tissue, is performed on consecutive sections. The localization of different biomarkers is particularly difficult when sections are not successive, and the co-localization of markers cannot be assessed at the level of the single cell [3]. Moreover, antibodies, used for the last 40 years, have proven to be at times unreliable, mainly due to reagent variations [9]. High-quality, reliable molecules are essential for detection, and a transition towards affinity molecules defined by their sequence has recently been proposed [51,52]. For histofluorescence multiplexing approaches, aptamers appear to be particularly suitable. Due to their smaller size compared with antibodies, they can better penetrate tissues [12]. Aptamers are chemically synthetized, which means that they do not vary from batch to batch. Fluorophores can easily be conjugated directly to aptamers, and these constructs are detected in multiplexing fluorescence experiments when aptamers with different specificities are conjugated to different fluorophores. The AHF technique is fast and easy to implement, and our results highlight its use to detect GBM heterogeneity in FFPE tissue samples. However, a number of considerations must be taken into account to avoid the misinterpretation of the histological data.
A very recent comparative analysis of cell-surface-targeting aptamers indicated that the characterization of many of these molecules was largely confounded by a lack of uniform assessment. Kelly et al. [53] compared the abilities of 15 different aptamers from the literature, surveying them particularly for their in vitro cell-binding capacities. The targets included PSMA, EGFR, hTfR, HER2, AXL, EpCAM, and PTK7. Only 5 out of the 15 aptamers showed receptor-specific activity, and among these five aptamers was aptamer E07, which supported the selection of this aptamer for our experiments. As in that study, we considered the use of well-documented aptamers to be important, particularly those studied for their binding to identified biomarkers on cells, to have a better chance of finding them suitable for histological detection. Aptamers are identified through an in vitro evolution process called SELEX, which stands for 'Systematic Evolution of Ligands by EXponential enrichment' [54,55]. It starts with an initial RNA or ssDNA library containing 10^14-10^15 oligonucleotides and involves iterative cycles of selection towards targets, including small molecules, proteins, peptides, toxins, whole cells, and tissues. Different SELEX processes have been developed for the selection of aptamers targeting tumor biomarkers, the two main ones being protein- and cell-SELEX [56]. Another selection method, called tissue-SELEX, allows aptamers to be identified on tissues. This method is the best suited for further applications of selected aptamers in histology. However, the a posteriori identification of molecular targets has rarely been performed [18,57] and is difficult to achieve. In our study, we therefore chose aptamers already well characterized in the literature for their cell-binding properties, namely, aptamers E07 and H02. Moreover, upstream of histofluorescence, we supplemented published data with cytofluorescence experiments using flow cytometry and confocal imaging. We used appropriate receptor-expressing GBM cells and included cells negative for receptor expression (Figure 2). The affinities of the aptamers for their targets were determined under conditions as close as possible to 'natural' conditions (i.e., affinities for cells). We showed that the KD of aptamer H02 differed 3.8-fold between the aptamer-recombinant integrin α5β1 and aptamer-cell interactions [44]. This difference was much greater for aptamer E07, as a very high binding affinity (2.4 ± 0.2 nM) was determined for the interaction between [α-32P]-ATP-labeled aptamer E07 and the recombinant human EGFR protein using filter binding assays [45], while much lower affinities were determined for the interaction between aptamer E07 and the U87 EGFR WT cell line (Table 1; 208.7 ± 45.6 nM) or EGFR-expressing pancreatic cells (26-67 nM [48]). These differences may certainly be due to the different techniques used, but they may also be due to differences in the conformations of soluble recombinant proteins and cell-surface proteins, to the functional bioavailability of receptors in a cellular context, and thus to the different SELEX processes used for aptamer identification, i.e., hybrid-SELEX, composed of cell- and protein-SELEX, for aptamer H02 [44] and protein-SELEX for aptamer E07 [45]. Nevertheless, the cellular affinities determined in our study were of the same order of magnitude as those reported in the literature for the interactions of most aptamers targeting cell-surface receptors [56].
Then, since aptamers, similarly to antibodies, might recognize epitopes on cells but not on FFPE tissues, immunolabeling was conducted alongside aptahistofluorescence with antibodies and aptamers of the same specificities (Figures 3 and 4). An indirect method was used for immunolabeling, which consisted of the successive incubation of anti-α5 or anti-EGFR antibodies followed by secondary antibodies. AHF is a direct method, as the aptamers are directly conjugated to fluorophores; it is, therefore, faster than IHC. The binding intensities determined using AHF correlated with the localization of EGFR and integrin α5β1 detected using immunolabeling. Moreover, the labeling of GBM tissues with aptamer H02 targeting integrin α5β1 confirmed the results previously obtained with the anti-integrin α5β1 antibody 1928 [41], highlighting inter-patient heterogeneity. In our study, we did not observe superior staining with a single aptamer compared with primary antibody staining, as recently described by Gomes de Castro et al. using super-resolution microscopy [58]; rather, similar staining of cell receptors was detected with aptamers in comparison with antibodies using confocal imaging and a digital slide scanner. Within the same GBM section, by means of AHF using H02, we observed intra-tumoral heterogeneity, showing that different regions of the same tumor contained cells with different protein expression levels. Different areas were observed: (i) some very intensely labeled in the tumoral core and in perivascular areas, (ii) others with less labeling in the tumor periphery, where invading cells could be detected, and (iii) areas with cells that did not express integrin α5β1.
Last but not least, the issue of autofluorescence must be considered before performing AHF and/or IHF experiments on tissues, as it complicates the data analyses. The natural fluorescence of red blood cells occurs at several wavelengths, so the distinction between test fluorescence and endogenous fluorescence is difficult [59]. Areas, and at times even whole tumor sections, that were highly necrotic could not be analyzed in AHF and IHF with fluorescent reporters that absorb light at wavelengths below 600 nm. In practice, classical controls were performed; these consisted of the analysis of slices stained with DAPI alone, or without the addition of primary antibodies for immunolabeling experiments, imaged with three filters. In addition, for EGFR and integrin α5β1 detection, we performed experiments with secondary antibodies and aptamers both conjugated to Cyanine 5 or Alexa 647, as autofluorescence was absent with far-red-emitting dyes (optical windows above 600 nm, as recommended [59]). Thus, the selectivity of the aptamers could be analyzed and compared to that of the antibodies in adjacent slices. For multiplexing experiments, to simultaneously detect integrin α5β1 and EGFR in the same slice, we used aptamer H02 conjugated to Cyanine 5 and aptamer E07 conjugated to Alexa 488. Hence, the use of the E07 aptamer conjugated to either Cyanine 5 or Alexa 488 allowed the data to be compared, thus invalidating areas with autofluorescence.
A few studies describe aptamers for multiplexing experiments. For example, the seminal paper by Dr. Zu and his team showed the combination of an aptamer targeting CD4 with antibodies to phenotype cells from lymph nodes, bone marrow, and pleural fluid [60]. However, to our knowledge, only one other multiplexing study simultaneously combining two or more aptamers on pathological human solid tissue has been carried out so far. Zamay and collaborators identified three DNA aptamers against post-operative lung carcinoma tissues [61], described their use in AHC for tumoral tissue characterization, and proposed that a pair of aptamers able to bind to tumor stroma be used for intraoperative tumor visualization [18]. In our study, having ensured that the H02 and E07 aptamers could detect integrin α5β1 and EGFR, respectively, on cells and tissues, having compared their tissue detection efficiency to that of antibodies specific to integrin α5β1 and EGFR, and having checked their tissue binding profiles when coupled to different fluorophores, we finally evaluated them in multiplexing experiments. The multi-detection experiments consisted of simultaneously labeling the two biomarkers, integrin α5β1 and EGFR, with the two aptamers, H02 and E07, covalently conjugated to two different fluorophores emitting at different, non-overlapping wavelengths (Alexa 488 for E07 and Cyanine 5 for H02). In practice, the aptamers were heated and then cooled separately to avoid inter-aptamer pairing; then, they were mixed and deposited on the GBM sections. Our results on human GBM tumoral tissues showed two different profiles: homogeneous or heterogeneous staining (Figure 5). The labeling of cells with both the H02 and E07 aptamers suggested that they expressed both integrin α5β1 and EGFR. Other tumor areas showed a less uniform pattern, with one of the two biomarkers being underexpressed.
Our data indicated that AHF was as sensitive as immunodetection and could be used to simultaneously detect biomarkers in the same tumor section and to reveal the spatial proximity between them. This study showed for the first time the application of fluorescent aptamers in multiplexing imaging experiments to label two biomarkers in human GBM tissues. These results confirmed functional results establishing a cross-talk between integrins and EGFR in several tumors, including gliomas [32,62], and raised the possibility that for EGFR-and integrin α5β1-positive patients, combined therapies based on the dual inhibition of both receptors might be of interest.
Conclusions
Though the road to using aptamers for the measurement of biomarker expression in tumors is still long, as only a few studies on aptamers have been conducted, our results confirm that aptamers could be alternative molecular probes for histology. Their unique properties would offer advantages over antibodies in the clinic, such as shorter reaction times, identical or better labeling properties, no cross-immunoreactivity issues, and, not least, the possibility of easy multiplex analyses of the same section without stripping, thus also reducing the need for precious materials such as those from rare donors. We demonstrated the application value of AHF in the detection of integrin α5β1 and EGFR, two biomarkers with wide-ranging cooperation in GBM. We believe that aptamers might have a role to play in multiplexing experiments, either using multiple aptamers or through combinations of aptamers and antibodies for the detection of different biomarkers, as alternatives to classical IHC for tumor diagnosis, representing a step towards the multiparameter analysis of whole tissue sections.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/pharmaceutics14101980/s1, Table S1: Information on aptamers used in this study, Figure S1: Detection of EGFR using IHF and AHF in MCF7 and MDA-MB-231 cells, Figure S2: Light microscopy with H&E dye of a section adjacent to that shown in Figure 4, Figure S3: Surface plot showing the intensity profile of cells represented in Figure 4A. | 2022-09-23T15:15:58.727Z | 2022-09-20T00:00:00.000 | {
"year": 2022,
"sha1": "7914bf60eb5fd4940c0b4155410c7dc8518eaf77",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4923/14/10/1980/pdf?version=1664536418",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e702ea1eba75ed68b0771aae3a2ef5e8a58af8ba",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
33040234 | pes2o/s2orc | v3-fos-license | Conditional Deletion of PDK1 in the Forebrain Causes Neuron Loss and Increased Apoptosis during Cortical Development
Decreased expression but increased activity of PDK1 has been observed in neurodegenerative disease. To study the in vivo function of PDK1 in neuron survival during cortical development, we generate forebrain-specific PDK1 conditional knockout (cKO) mice. We demonstrate that PDK1 cKO mice display striking neuron loss and increased apoptosis. We report that PDK1 cKO mice exhibit deficits on several behavioral tasks. Moreover, PDK1 cKO mice show decreased activities for Akt and mTOR. These results highlight an essential role of endogenous PDK1 in the maintenance of neuronal survival during cortical development.
Due to the embryonic lethality caused by whole-body PDK1 knockout, it has been impossible to use PDK1 −/− mice to study the in vivo functions of PDK1 in the postnatal cerebral cortex. Viable, cell-type-specific PDK1 conditional knockout (cKO) mice have helped solve this problem. Recent work showed that conditional deletion of PDK1 through GFAP-Cre-mediated gene recombination causes microcephaly in mice, indicating a critical role of PDK1 in brain development (Chalhoub et al., 2009). A conditional knock-in mouse model expressing the PDK1 L155E mutation displays microcephaly as well (Cordon-Barris et al., 2016). In addition, it has been demonstrated that PDK1 in neural progenitor cells (NPCs) is important for the generation of oligodendrocyte precursor cells (Watatani et al., 2012) and neuronal migration (Itoh et al., 2016). Work on epidermis-specific PDK1 cKO mice has revealed an essential role of PDK1 in asymmetric cell division (Dainichi et al., 2016).
Recent evidence has shown that PDK1 is involved in neurodegenerative disease, but how it exerts its role is controversial. On one hand, reduced PDK1 levels (Liu et al., 2011) and impaired PI3K signaling (Steen et al., 2005; Talbot et al., 2012) were found in Alzheimer's disease (AD) brain, suggesting a loss-of-function mechanism. On the other hand, another group reported that PDK1 activity was increased in AD brain, and that a PDK1 inhibitor enhanced α-secretase activity and reduced amyloid plaques in APP transgenic (Tg) mouse models of AD (Pietri et al., 2013), suggesting a gain-of-function mechanism. Kharebava et al. (2008) demonstrated that over-expression of PDK1 reduces trophic deprivation (TD)-induced apoptosis, and that RSK1/2 is required for PDK1-mediated neuroprotection. However, whether endogenous PDK1 plays a critical role in neuronal survival during cortical development remained uninvestigated. We aimed to address this question in the present study. Here, we crossed PDK1 f/f mice with a forebrain neuron-specific Cre Tg line (Gorski et al., 2002) to generate PDK1 cKO (PDK1 f/f;Emx1-Cre) animals, in which PDK1 is inactivated in excitatory neurons of the cortex. We show that PDK1 levels were significantly reduced in the cortex of PDK1 mutant mice. We report that PDK1 cKO mice display striking neuron loss, abnormal apoptosis, and severe memory deficits. We find that PDK1 cKO mice exhibit decreased Akt/mTOR activities. These findings highlight a protective role of endogenous PDK1 during brain development.
MATERIALS AND METHODS
Animal Care and Use

PDK1 cKO mice were generated by crossing floxed PDK1 f/f mice (Feng et al., 2010) with Emx1-Cre Tg mice (Gorski et al., 2002). Emx1-Cre and mTmG mice (Muzumdar et al., 2007) were purchased from the Jackson Laboratory (Bar Harbor, ME, United States). To generate PDK1 cKO mice, we crossed homozygous PDK1 f/f with Emx1-Cre to obtain PDK1 f/+;Emx1-Cre mice, which were then bred with PDK1 f/f to obtain PDK1 f/f;Emx1-Cre (PDK1 cKO) mice. PDK1 f/+;Emx1-Cre mice grew normally and their brain morphology did not differ from that of PDK1 f/+ or PDK1 f/f mice in our analyses. Therefore, PDK1 f/+;Emx1-Cre and PDK1 f/f mice served as littermate controls to PDK1 cKOs. To detect the floxed PDK1 allele, the following primers were used: forward, TGTGCTTGGTGGATATTGAT; reverse, AAGGAGGAGAGGAGGAATGT.
The mice were kept on a 12 h light/dark cycle (lights on 7:00–19:00) under conditions of constant humidity and temperature (25 ± 1 °C). The mice were group-housed (4–5 per cage) throughout the experimental period and had ad libitum access to food and water. Both male and female mice were used in this study.
Behavioral experiments were conducted during the light phase of the cycle (8:00–18:00). Different cohorts of mice were used for behavioral and biochemical experiments. The mice were bred and maintained in a specific-pathogen-free (SPF) animal room in the core facility of the Model Animal Research Center (MARC) at Nanjing University. The genetic background of the mice used in this study was C57BL/6. Mouse breeding was conducted under IACUC-approved protocols at the MARC. All experiments were performed in accordance with the Guide for the Care and Use of Laboratory Animals of the MARC at Nanjing University.
Brain Lysate Preparation
Mice were euthanized by CO2 at 3 weeks or 3 months of age. Tissues from various brain areas were quickly collected and then snap-frozen in liquid nitrogen. Cortical samples were homogenized in cold RIPA (radioimmunoprecipitation assay) lysis buffer (20 mM Tris-HCl, pH 7.4, 150 mM NaCl, 1 mM EDTA, 1% NP-40, 0.5% sodium deoxycholate, and 0.1% SDS) containing protease and phosphatase inhibitors (Thermo). Lysates were cleared by centrifugation (14,000 rpm for 15 min). Samples were stored at −80 °C until use. Protein concentration was determined against a BSA standard curve.
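As a concrete illustration of the last step, the sketch below shows how a protein concentration could be read off a linear BSA standard curve. This is a minimal example, not code from the study; the standard concentrations, absorbance values, and dilution factor are illustrative placeholders.

```python
# Minimal sketch: estimating protein concentration from a BSA standard curve.
import numpy as np

# Known BSA standards (mg/mL) and their measured absorbances (illustrative).
standards = np.array([0.0, 0.25, 0.5, 1.0, 1.5, 2.0])
absorbance = np.array([0.05, 0.18, 0.31, 0.58, 0.84, 1.10])

# Fit a straight line A = m*c + b through the standards.
m, b = np.polyfit(standards, absorbance, 1)

def protein_conc(sample_abs, dilution_factor=1.0):
    """Invert the standard curve to estimate sample concentration (mg/mL)."""
    return (sample_abs - b) / m * dilution_factor

print(f"Lysate: {protein_conc(0.47, dilution_factor=5):.2f} mg/mL")
```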
Paraffin Brain Blocks and Nissl Staining
Mice at 3 weeks or 3 months of age were placed into a CO2 chamber for about 10 s, until respiration became faint and the heartbeat weakened. Immediately after the CO2 treatment, cardiac perfusion was conducted using 15 ml of 4% paraformaldehyde (PFA) solution in phosphate-buffered saline (PBS) in a well-ventilated hood. The brain was dissected out and then fixed in PFA overnight. After fixation, the brain was dehydrated and embedded in paraffin.
After embedding, several paraffin brain blocks were prepared. In each block, 4 hemi-brains, including 2 controls and 2 cKOs, were placed together and sectioned sagittally (10 µm) using a microtome (Leica Microsystems, Bannockburn, IL, United States). This arrangement allowed 4 brain sections (2 from the control and 2 from the cKO group), cut on an identical stereotaxic plane, to be placed on the same slide. For embryos, the head was fixed in 4% PFA overnight and later embedded in paraffin. Coronal brain sections for each embryo were collected individually.
Sections were incubated at 58 °C for 1 h, deparaffinized in xylene, and rehydrated. They were rinsed in PBS for 5 min, soaked in 0.5% cresyl violet for 12 min, and then dehydrated through a graded ethanol series (70, 90, 95, and 100%). After the sections were cleared in xylene, they were coverslipped with neutral resin (Sinopharm Chemical Reagent Co., Ltd., Shanghai).
The Cortex Volume, Neuron Counting, and Cell Density

A method described by us previously (Tabuchi et al., 2009; Chen et al., 2010) was used to measure the cortex volume and to count the total number of neurons. It was based on an unbiased stereological neuron counting technique (West and Gundersen, 1990).
The following experiments were conducted to measure the cortex volume for each hemi-brain. First, for each paraffin block, a total of 8 brain slides spaced 400 µm apart were selected for Nissl staining. Since each paraffin block contained 2 control and 2 cKO hemi-brains, there were 4 brain sections on each slide. The section thickness was 10 µm. Second, Nissl-stained images were captured and the cortical area in each brain section was measured using the Olympus CellSens Standard system. For each mouse, cortical areas from 8 (control) or 6 (cKO) sections were averaged to obtain the mean value across sections. Third, the volume of a hemi-cortex was calculated using the following formula: volume = mean area × total sampled thickness of the hemi-brain. The latter was 3,200 µm for control (8 sections × 400 µm spacing) and 2,400 µm for cKO (6 sections × 400 µm spacing).
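The arithmetic of this Cavalieri-style volume estimate is simple enough to sketch. The following is an illustrative example assuming the sampling scheme described above; the section areas are made-up numbers, not measurements from the study.

```python
# Minimal sketch of the hemi-cortex volume estimate: mean cortical area per
# section multiplied by the total sampled thickness (sections x 400 um spacing).
def hemi_cortex_volume(section_areas_um2, spacing_um=400.0):
    mean_area = sum(section_areas_um2) / len(section_areas_um2)
    total_thickness = len(section_areas_um2) * spacing_um
    return mean_area * total_thickness  # volume in um^3

control_areas = [9.1e6, 9.4e6, 9.8e6, 9.6e6, 9.0e6, 8.8e6, 9.2e6, 9.5e6]  # 8 sections
cko_areas = [2.3e6, 2.5e6, 2.4e6, 2.2e6, 2.6e6, 2.4e6]                    # 6 sections

print(hemi_cortex_volume(control_areas))  # control: mean area x 3200 um
print(hemi_cortex_volume(cko_areas))      # cKO: mean area x 2400 um
```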
The following experiments were performed to calculate the total number of cortical NeuN+ cells for each hemi-brain. First, NeuN immunohistochemistry (IHC) was conducted using 8 slides spaced 400 µm apart from each paraffin brain block. Second, for each NeuN-stained section, a total of 10 microscopic fields were randomly selected under the 40× magnification lens of an Olympus BX53 microscope. Each microscopic field was 100 µm × 100 µm × 10 µm (10^5 µm³) in size and was defined as a counting unit. Third, the total number of NeuN+ cells in each counting unit was counted. The numbers were then averaged across sections to obtain a mean value for NeuN+ cell number per unit. The cell density was defined as the number of NeuN+ cells per 1 mm³ of tissue; since 1 mm³ (10^9 µm³) contains 10^4 counting units, density = mean number of NeuN+ cells per unit × 10^4. Fourth, the total number of cortical NeuN+ cells in each hemi-brain was calculated using the following formula: total number = cell density × cortex volume.
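To make the unit conversions explicit, here is a minimal sketch of the density and total-count formulas above. The per-unit counts and cortex volume are illustrative placeholders.

```python
# One counting unit = 100 x 100 x 10 um = 1e5 um^3, so 1 mm^3 (= 1e9 um^3)
# contains 1e4 counting units.
UNIT_VOLUME_UM3 = 100 * 100 * 10          # 1e5 um^3 per counting unit
UNITS_PER_MM3 = 1e9 / UNIT_VOLUME_UM3     # 1e4 units per mm^3

def neun_density_per_mm3(counts_per_unit):
    mean_count = sum(counts_per_unit) / len(counts_per_unit)
    return mean_count * UNITS_PER_MM3     # NeuN+ cells per mm^3

def total_neun_cells(counts_per_unit, cortex_volume_um3):
    volume_mm3 = cortex_volume_um3 / 1e9  # um^3 -> mm^3
    return neun_density_per_mm3(counts_per_unit) * volume_mm3

counts = [12, 15, 11, 14, 13, 12, 16, 13, 14, 12]  # NeuN+ cells per unit
print(total_neun_cells(counts, cortex_volume_um3=3.0e10))
```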
For the measurement of the cell body diameter of NeuN+ cells, a total of 345 cells from NeuN-stained images of PDK1 cKO mice at 3 weeks and 365 cells at 3 months were randomly selected to calculate the averaged value (n = 3 mice/group/age).
BrdU Labeling
BrdU (B5002, Sigma-Aldrich) was administered to pregnant dams at a dose of 100 mg/kg. To label proliferating NPCs, BrdU was injected intraperitoneally into pregnant dams at E13.5, E15.5, and E17.5. Embryonic brains were collected 30 min after the injection and were then processed for paraffin embedding. Each paraffin block contained only one embryonic brain, and serial coronal sections were prepared using a microtome.
For BrdU+ cell counting, three coronal sections spaced 200 µm apart were used for each embryo. Images of cortical BrdU staining were captured under the 40× objective lens of a Leica confocal laser scanning microscope. In each image, 2 counting units were randomly selected; each unit was an area of 100 µm (along the surface of the ventricular zone, VZ) × 200 µm (perpendicular to the VZ surface). The total number of BrdU+ cells in each unit was counted. For each embryo, the number of BrdU+ cells was averaged across 6 counting units to obtain the mean number.
For PH3+ cell counting, three coronal sections spaced 200 µm apart from each embryo were used. Images were captured under the 10× objective lens of a Leica confocal laser scanning microscope. The total number of PH3+ cells on the surface of VZ was counted. The number was then averaged across 3 sections to obtain the mean number of PH3+ cells for each embryo.
TUNEL Staining
Brain sections were blocked with 5% goat serum for 30 min and then treated with the TUNEL (terminal deoxynucleotidyl transferase-mediated dUTP-biotin nick end-labeling) BrightGreen Apoptosis Detection Kit (Vazyme) at 37 °C for 1 h (Tabuchi et al., 2009). The sections were washed three times with TBS (Tris-buffered saline) and then scanned using a Leica confocal laser scanning microscope.
Analysis of Dendritic Length
Dendrites of neurons in cortical layer V and hippocampal CA1 were analyzed. A Ctip2 antibody was used to label pyramidal neurons in these brain regions. Double immunostaining for Ctip2/MAP2 was conducted using brain sections at 3 weeks and 3 months. After the fluorescent immunohistochemistry (FIHC) experiments, images doubly positive for Ctip2/MAP2 were examined under the 20× objective lens of a ZEISS LSM 880 confocal laser scanning microscope. In this way, we could identify Ctip2+ cells displaying long MAP2+ dendrites in cortical layer V. For each mouse, we examined 2–3 brain sections to trace 10 cortical Ctip2+ cells that sent long, unbroken MAP2+ dendrites to cortical layer I/II. In hippocampal CA1, apical dendrites of Ctip2+ cells were quite long and relatively easy to measure. Images were captured under the 10× objective lens and then processed with ImageJ (https://imagej.nih.gov/ij/) to measure dendritic length. For each brain area, a total of 30 Ctip2+ neurons from three mice per group were analyzed.
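ImageJ's length measurement for a traced segmented line is essentially the summed Euclidean distance between consecutive points. A minimal re-implementation of that computation is sketched below; the trace coordinates are illustrative, not data from the study.

```python
import math

def polyline_length(points_um):
    """Sum of Euclidean distances between consecutive traced points (um)."""
    return sum(math.dist(p, q) for p, q in zip(points_um, points_um[1:]))

trace = [(0.0, 0.0), (12.5, 30.2), (20.1, 75.6), (22.4, 140.8)]
print(f"Dendritic length: {polyline_length(trace):.1f} um")
```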
Morris Water Maze Test
The water maze was a circular pool (1.6 m in diameter). In the hidden platform task, the platform (10 cm in diameter) was kept under water and maintained in the same position. The mice were trained to find the hidden platform with four trials per day (inter-trial interval, ITI = 15 min) for 5 days. Each training trial lasted 60 s. If a mouse was unable to find the platform, it was guided to it by hand and allowed to stay on it for 30 s. The swimming path of the mice was monitored using the ANY-Maze® tracking system (ANY-Maze, Stoelting Co., Wood Dale, IL, United States). Twenty-four hours after the last training trial on day 5, the mice were subjected to a probe test in which the platform was removed and the mice were allowed to search for it for 60 s.
Rotarod Test
The rotarod test was conducted in the same experimental room as the open-field test. The mice were placed in a neutral position on a stationary rotarod (3 cm in diameter, Shanghai Biowill Co., Ltd., Shanghai). Timers were used to record the time to fall. Mice were tested on the rotarod at constant rotation speeds of 10, 20, 30, and 40 rpm.
Open-Field Test
The ANY-Maze® system was used to monitor locomotion of the mice. A 40 cm × 40 cm Plexiglas chamber was set up in a quiet laboratory room. The area within 10 cm of the walls was defined as the wall area. During the test, the mouse was placed in the center of the open-field chamber and allowed to move for 10 min. After each testing trial, the chamber was thoroughly cleaned with 70% ethanol to remove odors left by animals tested in previous trials. The total distance traveled and the time spent in different areas were recorded.
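The two reported measures reduce to simple geometry on the tracked coordinates. The sketch below shows one way to score a track; the 30 Hz sampling rate and the coordinates are illustrative assumptions, not parameters of the ANY-Maze system used in the study.

```python
import math

ARENA = 40.0      # chamber side length, cm
WALL_BAND = 10.0  # wall-area width, cm

def in_wall_area(x, y):
    return (x < WALL_BAND or x > ARENA - WALL_BAND or
            y < WALL_BAND or y > ARENA - WALL_BAND)

def score_track(samples, fs_hz=30.0):
    """samples: (x, y) positions in cm recorded at fs_hz frames per second."""
    distance = sum(math.dist(p, q) for p, q in zip(samples, samples[1:]))
    wall_time = sum(in_wall_area(x, y) for x, y in samples) / fs_hz
    return distance, wall_time  # (cm traveled, s spent in the wall area)

track = [(20, 20), (21, 22), (25, 30), (32, 36), (35, 38)]
print(score_track(track))
```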
Data Analysis
Data are presented as the mean ± SEM. For behavioral data and cell counting data, analysis of variance (ANOVA) was conducted to compare main genotype effects. For biochemical results, a two-tailed Student's t-test was performed to examine the difference between control and cKO mice; p < 0.05 (*) and p < 0.01 (**) were considered statistically significant and highly significant, respectively.
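For readers who want to reproduce this style of analysis, the sketch below shows the two tests in SciPy. It is illustrative only: the arrays are placeholders, and the study's repeated-measures ANOVA with Greenhouse-Geisser correction is richer than the one-way example shown here.

```python
from scipy import stats

# Two-tailed Student's t-test for a biochemical readout (control vs. cKO).
control = [100.0, 96.2, 104.1, 99.5]
cko = [58.0, 61.3, 55.7]
t, p = stats.ttest_ind(control, cko)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 (*), p < 0.01 (**)

# One-way ANOVA for a main genotype effect on a behavioral measure.
f, p_anova = stats.f_oneway(control, cko)
print(f"F = {f:.2f}, p = {p_anova:.4f}")
```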
RESULTS

Reduced Size of the Cortex in PDK1 cKO Mice
Early work has shown that PDK1 −/− mice display multiple abnormalities and die at embryonic day 9.5 (E9.5) (Lawlor et al., 2002). In this study, we have generated viable forebrain-specific PDK1 cKO (PDK1 f/f;Emx1-Cre) mice. These mutant animals exhibited a smaller cerebrum than controls (Figure 1A). To examine the pattern of Cre-mediated PDK1 inactivation, the mTmG mouse (Muzumdar et al., 2007) was crossed to the Emx1-Cre mouse to obtain Emx1-Cre;mTmG. In the latter, the expression of green fluorescent protein (GFP) was mainly observed in the cortex and the hippocampus (Figure 1B). To examine the efficiency of PDK1 inactivation, we performed Western blotting for PDK1 using cortical homogenates. Significantly reduced PDK1 protein levels were observed in the cortex of PDK1 cKO mice at 3 weeks and 3 months of age (Figure 1C). The residual amount of PDK1 likely came from cells that do not express the Cre recombinase, including GABAergic neurons, blood cells, glial cells, and a small proportion of excitatory neurons. We further conducted double-immunostaining for NeuN and PDK1. We found that the majority of cortical NeuN-positive (+) cells in PDK1 cKO mice were PDK1-negative. In contrast, neurons in control mice were doubly positive for NeuN/PDK1 (Figure 1D).
Nissl staining was used to examine brain morphology. There was a remarkable reduction in the size of the cortex in PDK1 cKO mice (Figure 1E). The thickness of the cortex in PDK1 cKO mice was decreased by about 50% compared with age-matched littermate controls (Figure 1Ea'–d'). Quantification showed that the cortical volume was dramatically decreased in PDK1 cKO mice at both 3 weeks and 3 months (Figure 1F). In contrast, the size of the cerebellum in PDK1 cKO mice was not decreased at either age (Figure 1E), likely because Cre recombinase was not expressed in cerebellar neurons. Moreover, measurement of the averaged cerebellar area per section showed no significant difference between the two genotypes at either age (3 weeks: control = 100 ± 2.6%, cKO = 103.1 ± 2.3%; 3 months: control = 100 ± 3.5%, cKO = 99.4 ± 3.7%; ps > 0.6, Student's t-test).
Loss of Mature Neurons in PDK1 cKO Mice
We measured the number of mature neurons using NeuN as a marker. First, IHC for NeuN revealed significant reductions in cortex size and in the total number of NeuN+ cells in PDK1 cKO mice aged 3 weeks and 3 months (Figure 2A). Consistent with this, Western analysis confirmed reduced NeuN levels (Figure 2B). Second, a stereological cell counting method was used to count NeuN+ cell numbers (West and Gundersen, 1990; Tabuchi et al., 2009; Chen et al., 2010). Our results showed that the total number of NeuN+ cells was dramatically decreased in PDK1 cKO mice (Figure 2C). Measurement of the cell body diameter of NeuN+ cells indicated a smaller neuronal size in PDK1 cKO mice than in controls (Figure 2D). However, the density of NeuN+ cells in PDK1 cKO mice was increased compared with control animals (Figure 2E). This finding was in agreement with that reported for PDK1 conditional knock-in mice (Cordon-Barris et al., 2016). Overall, conditional deletion of PDK1 in the forebrain resulted in remarkable loss of mature neurons.

FIGURE 1 | Reduced size of the cortex in forebrain-specific PDK1 cKO mice. (A) Representative brain photos for a control mouse and a PDK1 cKO mouse at P0. (B) The expression pattern of GFP in the brain of a 3-week-old Emx1-Cre;mTmG mouse. GFP was mainly expressed in the cortex (CTX) and the hippocampus (HIP) but not the striatum (STR), the thalamus (THA), or the cerebellum. (C) Western analysis of PDK1. Cortical homogenates from mice aged 3 weeks and 3 months (n = 3–4/group/age) were prepared. There was a significant difference in PDK1 protein levels between control and PDK1 cKO mice (3 weeks: control = 100 ± 4.6%, cKO = 58.0 ± 5.8%; 3 months: control = 100 ± 3.5%, cKO = 60.3 ± 2.1%; ps < 0.005). GAPDH served as the loading control. (D) Double-immunostaining for NeuN/PDK1 in mice at 3 weeks of age. Most NeuN+ cells in control brain were PDK1+, but very few NeuN+ cells in PDK1 cKO brain were PDK1+. Scale bar = 100 µm. (E) Nissl staining for mice at 3 weeks and 3 months. Boxed areas in control (a,c) and cKO (b,d) are enlarged as a', c', b', and d', respectively. (F) Relative cortical volume. There was a significant difference in cortex size between control and PDK1 cKO mice at 3 weeks (control = 100 ± 9.9%, cKO = 24.1 ± 2.0%; n = 3–4 mice/group; p < 0.001) and 3 months (control = 100 ± 4.4%, cKO = 14.9 ± 1.9%; n = 3–4 mice/group; p < 0.001).
To investigate whether neuron loss occurred via apoptosis, we conducted the TUNEL assay (Tabuchi et al., 2009). Brain sections of PDK1 cKO mice aged P0, 1 week, and 3 weeks were used. We found that the total number of TUNEL+ cells in the cortex of PDK1 mutants was increased (Figure 2F and Supplementary Figure 1). Cell counting confirmed that the averaged number of TUNEL+ cells per section in PDK1 cKO mice was significantly larger than that in controls (Figure 2G: F = 60, df = 1/8, p < 0.001 and Supplementary Figure 1), suggesting enhanced apoptotic cell death. To identify which cell type underwent apoptosis, we performed double-staining for TUNEL/NeuN, TUNEL/Tuj1, or TUNEL/GFAP (Figure 2H). Only TUNEL+/NeuN+ and TUNEL+/Tuj1+ cells (Figure 2H, indicated by white arrows) were observed in PDK1 cKO mice compared with controls. No TUNEL+/GFAP+ cells were detected (Figure 2H).
We next conducted FIHC for PH3, a marker for NPCs at the M-phase of the cell cycle. There was no qualitative difference in PH3 immunoreactivity between control and PDK1 cKO embryos (Supplementary Figure 2C). The averaged number of PH3+ cells on the surface of the VZ in PDK1 cKO embryos was not significantly decreased (Supplementary Figure 2D: p > 0.2 for each age). Overall, the self-renewal and proliferation of NPCs were not impaired in PDK1 cKO mice.
Loss of Synapses and Dendrites in PDK1 cKO Mice
To study whether the morphology of synapses and dendrites was affected in PDK1 cKO mice, we first conducted FIHC for synaptophysin (SVP38), a marker for the pre-synaptic element. Qualitatively reduced intensity of SVP38 immunoreactivity was observed in the cortex and the hippocampus of PDK1 cKO mice across ages (Figure 3A), suggesting loss of synapses. In addition, Western analyses of SVP38 and post-synaptic density 95 (PSD95) showed that their levels were significantly decreased in PDK1 cKO mice (data not shown). We then performed double-immunostaining for MAP2/Ctip2, markers for dendrites and pyramidal neurons in cortical layer V, respectively. We found that the immunoreactivity of MAP2 was qualitatively reduced in the cortex of PDK1 cKO mice compared with age-matched littermate controls (Figures 3B,D), suggesting loss of dendrites. We further measured the length of dendrites of Ctip2+ neurons. The averaged dendritic length was significantly decreased in PDK1 cKO mice at both 3 weeks (Figure 3C) and 3 months (Figure 3E). For neurons in cortical layer V, there was a more than 80% reduction in averaged dendritic length in PDK1 mutants. For pyramidal neurons in hippocampal CA1, the averaged dendritic length was also dramatically reduced in PDK1 cKO mice (Figures 3C,E).

FIGURE 3 | Loss of synapses and dendrites in PDK1 cKO mice. (A) IHC for SVP38. Qualitatively altered immunoreactivity of SVP38 in the cortex and the hippocampus of PDK1 cKO mice aged 3 weeks (a–d) or 3 months (e–h). Scale bar = 100 µm. (B) Double-staining of MAP2/Ctip2 in the cortex (a,b) and the hippocampus (c,d) of mice at 3 weeks. Dendrites, neurons, and cell bodies were labeled by MAP2 (green), Ctip2 (red), and DAPI (blue), respectively. Scale bar = 100 µm. (C) There were significant differences in averaged dendritic length for neurons in layer V of the cortex and in the hippocampal CA1 area between control and PDK1 cKO mice (30 Ctip2+ neurons from 3 mice per brain area per group; **p < 0.01). (D) Double-staining of MAP2/Ctip2 in mice at 3 months (a–d). Scale bar = 100 µm. (E) The averaged dendritic length was significantly different between the two groups (30 Ctip2+ neurons from 3 mice per brain area per group; **p < 0.01).
Astrocytosis in PDK1 cKO Mice
To study whether there were changes in glial cells in the cortex of PDK1 cKO mice, IHC for GFAP was first performed. An increased number of GFAP+ cells was observed in the cortex of PDK1 cKO mice. The increase in GFAP immunoreactivity was subtle in PDK1 cKOs at 3 weeks (Figure 4Aa,b) but more robust at 3 months (Figure 4Ac,d), suggesting progressive astroglial activation. Second, IHC for Iba1 was conducted. The immunoreactivity of Iba1 in control and PDK1 cKO mice did not qualitatively differ at either age (Figure 4B), indicating no significant microgliosis. Since the Cre recombinase was also expressed in astrocytes of PDK1 f/f;Emx1-Cre mice, loss of PDK1 might also affect the number of astrocytes. We conducted double-immunostaining for GFAP/PDK1 and found that GFAP+ cells were largely PDK1-negative in PDK1 cKO mice (Figure 4C), indicating that PDK1 was inactivated in astrocytes. Moreover, it has been demonstrated that astrocytosis is associated with neuron loss in neurodegenerative mouse models (Saura et al., 2004; Tabuchi et al., 2009; Cheng et al., 2015). Overall, astroglial activation in PDK1 cKO mice may be due to neuronal death and a cell-autonomous function of PDK1 in astrocytes.

FIGURE 5 | Learning deficit in PDK1 cKO mice. (A) Escape latency for control and PDK1 cKO mice trained on a hidden platform task for 5 consecutive days. There was a significant difference in escape latency between the two genotype groups (n = 6/group). PDK1 cKO mice showed no improvement in their performance during the 5-day training. (B) Length of swim path. There was a significant difference between control and PDK1 cKO mice across the training period. (C) Quadrant occupancy in a probe test conducted 24 h after the last training trial. There was a significant difference in the time spent in the target quadrant between control and PDK1 cKO mice (*p < 0.05). There was also a significant difference in the average time spent in the remaining three quadrants (adjacent left, adjacent right, and opposite) between control and PDK1 cKO animals (*p < 0.05). (D) Latency to fall from a rotarod. Four rotating speeds (10, 20, 30, and 40 rpm) were used. There was no significant difference in the latency to fall between control (n = 10) and PDK1 cKO mice (n = 6) at the lower speeds (ps > 0.1). There was a significant difference between control and PDK1 cKO mice at the highest speed (****p < 0.001).
Learning Deficit in PDK1 cKO Mice
To determine whether the cognitive ability of PDK1 mutant mice was affected, we used a Morris water maze task to test spatial learning. In this task, 2- to 3-month-old mice were trained to find a hidden platform over 5 days. During the 5-day training period, there was no improvement in the escape latency of PDK1 cKO mice. ANOVA confirmed a highly significant main genotype effect (F = 46.1, df = 1/10, p < 0.001) between the two groups of mice across 5 days (Figure 5A). There was also a significant genotype effect (F = 11.6, df = 3.0/30.1, p < 0.001) on the length of the swim path (Figure 5B). Overall, these results indicated that spatial learning was severely affected in PDK1 cKO mice.
One day after the last training trial, the mice were subjected to a probe test, in which no platform was available in the water. The average time spent in the target and other quadrants of the water maze was analyzed (Figure 5C). A significant genotype effect (F = 8.5, df = 1/10, p < 0.05) and a significant quadrant × genotype effect (F = 4.0, df = 2.4/23.9, p < 0.05, Greenhouse-Geisser correction) were observed, suggesting that no spatial memory had formed in PDK1 cKO mice. We also found that control mice preferred to search the target quadrant, whereas PDK1 cKO mice swam randomly during the probe test (data not shown). Next, a rotarod task was conducted to examine motor learning. Rotating speeds of 10, 20, 30, and 40 rpm were used, and the latency to fall from the rotating rod was averaged (Figure 5D). ANOVA revealed a significant speed × genotype effect (F = 22.8, df = 1.5/21.0, p < 0.001). At the lower speeds of 10, 20, and 30 rpm, there were no significant genotype effects on the latency to fall between control and PDK1 cKO mice (ps > 0.1). However, there was a significant genotype effect at 40 rpm (p < 0.001), likely indicating impaired motor learning.
Decreased Activities for Akt and mTORC1 in PDK1 cKO Mice
Our biochemical analysis showed that levels of total Akt (T-Akt) were not changed in PDK1 cKO mice at either age (Figure 6A, ps > 0.2), indicating unaltered Akt expression. In contrast, relative levels of pAkt Thr308 were decreased in PDK1 cKO mice (Figure 6A).

FIGURE 6 | Levels of pAkt and pGSK3 in PDK1 cKO mice. (A) Western analyses of pAkt and total Akt (T-Akt). There were significant differences in relative levels of pAkt T308 and pAkt S473 between control and PDK1 cKO mice at 3 weeks and 3 months (**p < 0.01; n = 3–4/group/age). There was no significant difference in T-Akt levels between control and PDK1 cKO mice at either age (ps > 0.2). GAPDH served as the loading control. (B) Western analyses of pGSK3α, pGSK3β, total GSK3α, and total GSK3β. There were significant differences in relative levels of pGSK3α S21 and pGSK3β S9 between control and PDK1 cKO mice at 3 weeks and 3 months (**p < 0.01). There were no significant differences in levels of total GSK3α and GSK3β between control and PDK1 cKO mice (ps > 0.5). GAPDH served as the loading control.
Previous evidence has demonstrated that mTOR signaling is critical for neuronal survival (Sarbassov et al., 2005; Kim et al., 2010) and dendritic morphology (Jaworski et al., 2005; Kumar et al., 2005). To investigate whether mTORC1 activity was affected, we examined pS6K and pS6 using cortical samples at 3 months. We found that levels of total S6K and S6 were not changed in PDK1 cKO mice (Figures 7C,D, ps > 0.5). However, levels of pS6K Thr389 and pS6 Ser235/236 were dramatically decreased (ps < 0.05). Overall, mTORC1 activity was decreased in PDK1 cKO mice.
DISCUSSION
PDK1 is a key member of the PI3K signaling pathway (Mora et al., 2004; Engelman et al., 2006) and has been implicated in neurological diseases (Liu et al., 2011; Pietri et al., 2013). The early embryonic lethality of PDK1 −/− mice precludes studying whether loss of endogenous PDK1 affects neuronal survival during cortical development. In this study, viable forebrain-specific PDK1 cKO mice were generated, and the following novel findings were reported. First, conditional deletion of PDK1 in the forebrain causes dramatic neuron loss and increased apoptotic cell death. Second, conditional deletion of PDK1 in the forebrain results in impaired mTOR activity.
Microcephaly has been observed in early work using a PDK1 cKO line in which PDK1 is conditionally inactivated in neurons and astrocytes of the whole brain (Chalhoub et al., 2009). Given that PDK1 is specifically deleted in the forebrain of PDK1 f/f;Emx1-Cre mice, our findings are broadly consistent with those reported previously (Chalhoub et al., 2009; Itoh et al., 2016). First, the current model differs from the line of Chalhoub et al. (2009) in that the cerebellar phenotype is not the same, likely because PDK1 inactivation does not occur in cerebellar neurons in our cKO line. Second, unlike PDK1 f/f;GFAP-Cre (Chalhoub et al., 2009) or PDK1 f/f;Emx1-Cre (this study) mice, PDK1 f/f;Nestin-Cre mice die shortly after birth (Itoh et al., 2016) and therefore cannot be used to study the postnatal brain. Third, PDK1 f/f;Nex-Cre mice survive to adulthood and display a cortical lamination defect (Itoh et al., 2016). Moreover, it has been nicely demonstrated that the abnormal cortical lamination in PDK1 f/f;Nex-Cre mice is caused by impairment of PDK1/Akt-dependent neuronal migration (Itoh et al., 2016).
Since the total number of NeuN+ cells is dramatically reduced and the length of dendrites is significantly decreased in PDK1 cKO mice, these changes could directly lead to the formation of a small cortex. Overall, endogenous PDK1 may negatively regulate apoptosis in the cortex and thereby control brain/cortex size. Interestingly, apoptosis is involved in cell death in brain diseases displaying age-related neuron loss. First, previous studies have shown that increased apoptosis is associated with neuron loss in neurodegenerative mouse models (Feng et al., 2004; Tabuchi et al., 2009; Wines-Samuelson et al., 2010; Cheng et al., 2015). Second, early work demonstrated increased apoptotic cell death in the AD brain (Su et al., 1994; Lassmann et al., 1995; Anderson et al., 1996).
Since an increased number of TUNEL+/NeuN+ and TUNEL+/Tuj1+ cells was found in PDK1 cKO mice, it is reasonable to conclude that neuronal apoptosis contributes to cortical neuron loss. In contrast, neuron loss in PDK1 cKO mice is unlikely due to impaired proliferation of NPCs during development, since our results showed that the self-renewal of NPCs was not affected, as revealed by the BrdU pulse-labeling and PH3 IHC experiments.
To explore the underlying molecular mechanisms, we focused on the Akt/mTOR pathway. First, our biochemical results on pAkt Thr308 and pAkt substrates strongly suggest that conditional inactivation of PDK1 leads to reduced Akt activity. Second, our analyses of pS6K and pS6 indicated decreased activity of mTORC1 in PDK1 cKO mice. Third, PDK1 f/f;Emx1-Cre mice exhibit increased pGSK3 levels, which may be caused by enhanced PKA activity. Consistent with this notion, previous evidence has shown that PKA inhibits GSK3 by phosphorylating Ser21/Ser9 of GSK3α/3β (Fang et al., 2000), and that the phosphorylation of GSK3 by PKA does not require activation of Akt (Li et al., 2000). Overall, since PDK1, Akt, and mTOR are important for cell/neuron survival and dendritic morphogenesis (Datta et al., 1997; Jaworski et al., 2005; Kumar et al., 2005; Sarbassov et al., 2005; Kharebava et al., 2008; Kim et al., 2010), it is reasonable to conclude that PDK1 may control neuronal survival during cortical development via activation of Akt/mTOR signaling.
In this study, we report that PDK1 cKO mice display deficits in spatial learning and memory. Since it has been demonstrated that neuron loss and synaptic loss directly contribute to cognitive impairments in neurodegenerative disease (Gomez-Isla et al., 1997; Terry, 2000), we reason that the learning deficit in PDK1 cKO mice is likely caused by the massive loss of neurons and synapses in the cortex. However, since there was an increase in open-field activity in PDK1 cKO mice, it cannot be ruled out that increased anxiety may directly cause or exacerbate the learning deficit in PDK1 mutant mice. | 2017-10-20T17:05:54.414Z | 2017-10-20T00:00:00.000 | {
"year": 2017,
"sha1": "e1f8516d6f6bec9d78d573d415ee776234df5d00",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fncel.2017.00330/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e4b69ba06eadbea0a5f25b07c335af362857b75f",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
10519653 | pes2o/s2orc | v3-fos-license | Long Non-Coding RNAs: Critical Players in Hepatocellular Carcinoma
Hepatocellular carcinoma (HCC) is a complex disease with multiple underlying pathogenic mechanisms caused by a variety of etiologic factors. Emerging evidence has shown that long non-coding RNAs (lncRNAs), transcripts larger than 200 nucleotides (nt), play important roles in the development and progression of various types of cancer. In recent years, a number of dysregulated lncRNAs in HCC have been revealed, and the roles of several of them in HCC have been characterized. All these findings point to the potential of lncRNAs as prospective novel therapeutic targets in HCC. In this review, we summarize known dysregulated lncRNAs in HCC and review the potential biological roles and underlying molecular mechanisms of lncRNAs in HCC. Additionally, we discuss the prospects of lncRNAs as potential biomarkers and therapeutic targets for HCC. In conclusion, this paper will help us gain a better understanding of the molecular mechanisms by which lncRNAs perform their functions in HCC and also provide general strategies and directions for future research.
Introduction
Hepatocellular carcinoma (HCC) is one of the most common cancers in the world, with more than 700,000 cases diagnosed and approximately 600,000 deaths reported annually, especially in East and Southeast Asia, Africa, and Southern Europe [1,2]. This disease is often associated with an extremely poor prognosis because patients are either diagnosed at a very late stage or experience recurrence and metastasis after surgical resection [3]. It is well known that a variety of risk factors have been associated with the incidence of HCC, such as hepatitis B virus (HBV) and hepatitis C virus (HCV) infection, aflatoxin B1 intake, tobacco smoking, alcoholic cirrhosis, and so on [4]. Poor understanding of the mechanisms underlying the pathogenesis of HCC makes it difficult to diagnose and treat at an early stage; thus, there is an urgent need to elucidate the molecular mechanisms underlying HCC and to identify effective targets for therapy or early detection. Although significant advances have been made in recent decades, our understanding of the underlying molecular mechanisms of HCC remains limited, and investigations have largely focused on the role of protein-coding genes and some classic epigenetic factors, including microRNAs (miRNAs), DNA methylation, and several types of histone modifications involving histone methylation and acetylation [4][5][6][7][8][9][10].
In recent years, advancements in genome-wide analyses of the mammalian transcriptome have revealed a novel class of transcripts, long noncoding RNAs (lncRNAs), which are pervasively transcribed in the genome [11]. LncRNAs are operationally defined as transcripts of more than 200 nucleotides (nt) in length that lack significant open reading frames (ORFs) and can be localized to both the nucleus and cytoplasm [12,13]. Accumulating evidence indicates that lncRNAs are not the "dark matter" of the genome; rather, they play significant roles in various biological processes through complicated mechanisms, including X-inactivation, genomic imprinting, cell differentiation, cell apoptosis, stem cell pluripotency, nuclear trafficking, the heat shock response, and genome rearrangement [14]. It is noteworthy that an increasing number of studies have demonstrated that lncRNAs are a new class of regulatory molecules involved in a variety of human diseases, especially cancer, through modulating gene expression at the transcriptional, post-transcriptional, or epigenetic level [15]. Some classical lncRNAs, such as H19, HOTAIR, MALAT1, MEG3, and XIST, have been found to be dysregulated in a variety of cancers and have shown clinical potential as diagnostic biomarkers and therapeutic targets, because their aberrant expression is significantly associated with carcinogenesis, metastasis, or prognosis [16,17]. The potential roles of lncRNAs in HCC, the most common type of primary liver cancer, are attracting increased attention in cancer research, and in recent years significant and substantial progress has been made toward identifying and functionally characterizing HCC-related lncRNAs.
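The operational definition above (longer than 200 nt, no significant ORF) is easy to express as a first-pass filter. The sketch below is a simplification for illustration only: it scans just the forward strand, and the 100-codon ORF cutoff is a common heuristic rather than a value taken from this review.

```python
import re

def longest_orf_nt(seq):
    """Length (nt) of the longest ATG...stop ORF on the forward strand."""
    orfs = re.findall(r"ATG(?:[ACGT]{3})*?(?:TAA|TAG|TGA)", seq.upper())
    return max((len(o) for o in orfs), default=0)

def looks_like_lncrna(seq, min_len=200, max_orf_codons=100):
    return len(seq) > min_len and longest_orf_nt(seq) < max_orf_codons * 3

print(looks_like_lncrna("ATG" + "AAA" * 300 + "TAA"))  # long ORF -> False
```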
In this review, we focus our attention on lncRNAs that are involved in HCC. Firstly, we summarize known dysregulated lncRNAs in HCC, and then we review potential biological roles and underlying molecular mechanisms of lncRNAs in HCC. Finally, we discuss prospects of lncRNAs as potential biomarker and therapeutic target for HCC.
All in all, this paper will help us gain a better understanding of molecular mechanisms by which lncRNAs perform their function in HCC and also provide general strategies and directions for future research.
Dysregulation of Long Non-Coding RNAs (lncRNAs) in Hepatocellular Carcinoma
It has been shown that most lncRNAs are expressed in a tissue/cell-type-specific pattern or in a developmental-stage-specific manner. Like mRNAs, they are transcribed by RNA polymerase II (RNA Pol II) and possess a 5'-methyl cap and a polyadenylated tail, indicating that lncRNAs can be tightly regulated, may play specific biological roles in a variety of biological processes and human diseases, and can lead to undesired biological consequences when dysregulated. Indeed, a growing body of evidence indicates that dysfunctional lncRNAs are implicated in a broad range of cancers, including HCC. So far, a handful of dysregulated lncRNAs associated with HCC have been identified (see Table 1).
H19, a 2.3-kb lncRNA, is a well-known paternally imprinted (maternally expressed) gene that is highly expressed from the early stages of embryogenesis through fetal life in many organs but is almost entirely down-regulated postnatally; it plays important roles in embryonic development and growth control [18]. Emerging evidence has shown that erasure of H19 imprinting and the subsequent high expression level of H19 are associated with tumor growth, metastasis, and invasion in several types of cancer [19][20][21]. Kim and Lee (1997) first found that the expression of H19 usually shifts from monoallelic to biallelic in HCC and that it might play a causal role in the epigenetic mechanisms involved in tumor development and/or progression [22]. Later, the H19 RNA level was shown to be up-regulated in HBV-associated HCC [23]. Additionally, Matouk et al. revealed that hypoxia could strongly up-regulate the level of H19 RNA in HCC cell lines [24,25]. HOTAIR (Homeobox antisense intergenic RNA), an ncRNA with a length of 2,158 nt, is transcribed from the antisense strand of the homeobox C gene locus on chromosome 12. A large number of studies have shown that HOTAIR is up-regulated in various cancers and correlates with carcinogenesis and metastasis, as well as poor prognosis [26]. A previous study by Geng et al. reported that HOTAIR expression was significantly higher in hepatocellular carcinoma (HCC) tissue than in adjacent noncancerous tissue [27]. A recent study in HCC also found that HOTAIR was overexpressed in HCC patients and was associated with a worse prognosis and an increased risk of metastasis in these patients [28].
HOTTIP (HOXA transcript at the distal tip) is an lncRNA transcribed from the 5' end of the HOXA locus that regulates the activation of some HOXA genes in vivo [29]. A recent study by Quagliata et al. reported that HOTTIP was significantly up-regulated in HCC specimens and that its high expression level was associated with metastasis formation and poor patient survival in HCC [30].
HULC (highly up-regulated in liver cancer), a 500-nt spliced lncRNA, was first identified by Panzitt et al. as a novel mRNA-like noncoding RNA remarkably up-regulated in HCC [31]. HULC expression has been reported to be regulated by the transcription factor CREB (cyclic adenosine monophosphate responsive element binding protein) in Hep3B cells [32]. Interestingly, the expression level of HULC is positively associated with that of hepatitis B virus X protein (HBx) in clinical HCC tissues. Moreover, HBx could up-regulate the HULC expression level in L-O2 cells (a human immortalized normal liver cell line) and HepG2 cells (a human hepatoma cell line) [33].
KCNQ1OT1 (potassium voltage-gated channel, KQT-like subfamily, member 1 overlapping transcript 1), a maternally imprinted lncRNA transcribed from the KCNQ1 locus and responsible for transcriptionally silencing a group of genes at the KCNQ1 locus in cis by modulating histone methylation, has been found to be involved in various types of cancers [34]. A recent study identified a short tandem repeat (STR) polymorphism (rs35622507) within the KCNQ1OT1 region as a risk-conferring polymorphism for HCC in the Chinese population and reported a significant genotype-phenotype correlation, in which the protective genotypes (heterozygote and non-10) of the STR polymorphism confer increased KCNQ1OT1 expression and partially decreased CDKN1C expression in vitro [34].
Linc-RoR (long non-coding RNA regulator of reprogramming), a large intergenic noncoding RNA with a length of 2.6 kb, was previously identified as a key reprogramming regulator whose expression is connected to pluripotency through the regulation of the key pluripotency transcription factors (TFs) Oct4, Sox2, and Nanog as a competing endogenous RNA (ceRNA) [35,36]. Interestingly, Takahashi et al. revealed that the expression of linc-RoR was up-regulated in malignant cells compared with non-malignant hepatocytes and increased in response to hypoxia [37].
MALAT1 (metastasis-associated lung adenocarcinoma transcript 1), an lncRNA originally identified as overexpressed in patients at high risk for metastasis of non-small cell lung cancer (NSCLC), is up-regulated in many solid tumors and associated with cancer metastasis and recurrence [38]. MALAT1 has been shown to be up-regulated in HCC cell lines and clinical tissue samples [38,39].
MEG3 is a maternally imprinted gene highly expressed in the human pituitary. It is able to interact with cyclic AMP, p53 (tumor protein p53), and growth differentiation factor 15 (GDF15), and plays an important role in the control of cell proliferation. Decreased MEG3 expression has been observed in several types of cancer [40]. Huang et al. (2007) found that MEG3 is down-regulated in HCC compared with normal liver tissue [41]. A later study also showed that MEG3 expression was markedly reduced in four human HCC cell lines compared with normal hepatocytes, that overexpression of MEG3 in HCC cells dramatically inhibited HCC cell growth, and that MEG3 expression could be regulated by microRNA-29 [42].
PCNA-AS1 (proliferating cell nuclear antigen antisense RNA 1), an antisense long noncoding RNA located on the opposite strand of the proliferating cell nuclear antigen (PCNA) gene, was recently found by Yuan et al. to be significantly up-regulated in HCC compared with peritumoral tissues [43].
In particular, unprecedented advances in high-throughput screening technologies, such as microarrays and transcriptome sequencing, facilitate the large-scale identification and characterization of novel disease-related genes, including lncRNAs. Excitingly, several papers have revealed lncRNA expression profiles in HCC samples and paired non-tumor samples using microarrays, and a set of HCC-related lncRNAs have been identified. For example, lncRNA-DREH (down-regulated expression by HBx), an lncRNA differentially expressed between the livers of HBx transgenic mice and wild-type mice, was identified by microarray; the expression level of its human ortholog, hDREH, was frequently down-regulated in HBV-related HCC tissues in comparison with adjacent noncancerous hepatic tissues, and its decrease was significantly associated with poor survival in HCC patients [44]. By comparing the lncRNA expression profiles of HBV-related HCC and paired peritumoral tissue, Yang et al. found that lncRNA-HEIH (high expression in HCC), one of the differentially expressed lncRNAs, was highly expressed in HBV-related HCC and significantly correlated with recurrence [45]. LncRNA-MVIH (microvascular invasion in HCC), an lncRNA derived from the same microarray data used for the identification of lncRNA-HEIH, was also shown to be up-regulated in HCC [46].
LncRNA-LET (low expression in tumor), an lncRNA also derived from the same microarray data used for the identification of lncRNA-HEIH, was shown to be down-regulated in HCC [47]. By comparing lncRNA expression levels between TGF-β-treated and untreated SMMC-7721 hepatoma cells using microarrays, Yuan et al. found that lncRNA-ATB (lncRNA activated by TGF-β) was highly expressed in HCC and associated with poor prognosis [48]. Additionally, a recent study revealed that lncRNA-hPVT1 (human plasmacytoma variant translocation 1), the human ortholog of lncRNA-mPVT1 (mouse plasmacytoma variant translocation 1), a fetal liver-specific lncRNA identified by microarray analysis, is significantly up-regulated in HCC tissues, and high hPVT1 expression is associated with poor prognosis in HCC patients [49].
uc002mbe.2, a TSA (trichostatin A)-induced lncRNA, was strongly expressed in TSA-treated Huh7 cells. Yang et al. found that uc002mbe.2 showed more than 300-fold induction upon TSA treatment and that its expression level was significantly lower in HCC cell lines and liver cancer tissue compared with normal human hepatocytes and adjacent noncancerous tissues [50].
URHC (up-regulated in hepatocellular carcinoma) is an lncRNA highly expressed in hepatoma cells and HCC tissues; it was originally identified by comparing the lncRNA expression profiles of three HCC cell lines and normal hepatocytes using an lncRNA microarray. Xu et al. revealed that higher expression of URHC correlated with poor overall survival [51].
Hepatocellular Carcinoma (HCC) Growth
To date, many lncRNAs dysregulated in HCC have been demonstrated to play important roles in HCC growth in vitro or in vivo. In the study conducted by Matouk et al., H19 knockdown ablated the tumorigenicity of HCC in vivo and significantly abrogated anchorage-independent growth after hypoxia recovery [24]. In vitro assays in the HCC cell line Bel7402 demonstrated that knockdown of the HOTAIR lincRNA could reduce cell proliferation [27]. Du et al. demonstrated that HULC could promote cell proliferation using MTT, colony formation, and tumorigenicity assays [33]. Quagliata et al. demonstrated that knockdown of HOTTIP could significantly reduce the proliferation of HuH-6 and HuH-7 cell lines [30]. Yang et al. revealed that knockdown of lncRNA-HEIH could inhibit HCC cell proliferation by affecting the cell cycle, and the growth of tumors from lncRNA-HEIH-down-regulated xenografts was significantly inhibited compared with that of tumors formed from control xenografts [45]. Yuan et al. also demonstrated that lncRNA-MVIH could promote HCC growth both in vitro and in vivo [46]. Huang et al. revealed that suppression of cellular lncRNA-DREH could enhance cell proliferation in vitro and that its overexpression could repress tumor growth in vivo [44]. Recently, Takahashi et al. uncovered that knockdown of linc-RoR, a hypoxia-responsive lncRNA, could decrease cell viability in HCC cells during hypoxia [37]. Yuan et al. demonstrated that the lncRNA PCNA-AS1 could dramatically promote tumor growth in vitro and in vivo [43]. A recent study revealed that lncRNA-hPVT1 could promote cell proliferation, cell cycling, and a stem cell-like phenotype of HCC cells in vitro and promote HCC growth in vivo [49]. Additionally, another recent study demonstrated that URHC inhibition could reduce the proliferation of HCC cells [51].
HCC Invasion and Metastasis
It is well known that the poor prognosis and high recurrence rate of HCC are largely due to the high incidence of intrahepatic and extrahepatic metastases [58]. Thus, the inhibition of invasion and metastasis is of great importance in HCC therapy. There is now increasing evidence that lncRNAs play important roles in the invasion and metastasis of HCC. For example, Huang et al. demonstrated that overexpression of lncRNA-DREH could inhibit tumor metastasis in vivo in orthotopic liver-implanted and peripheral intravascular-implanted metastatic models [44]. Yuan et al. revealed that lncRNA-MVIH overexpression resulted in significantly more frequent intrahepatic metastasis in a liver metastasis tumor model [46]. Lai et al. found that inhibition of MALAT1 in HepG2 cells could effectively reduce cell motility and invasiveness [39]. Yang et al. found that the low expression of lncRNA-LET is involved in cell invasion under hypoxic or normoxic conditions and that its overexpression can inhibit the metastasis of HCC in vivo [47]. Additionally, a recent study revealed that lncRNA-ATB, an lncRNA activated by TGF-β, can induce EMT and cell invasion in vitro and promote the invasion-metastasis cascade of HCC cells in vivo [48].
HCC Apoptosis
Some studies have demonstrated that lncRNAs are involved in HCC via effects on cell apoptosis. Braconi et al. revealed that MEG3 expression was markedly reduced in four human HCC cell lines compared with normal hepatocytes, and that enforced expression of MEG3 in HCC cells significantly decreased both anchorage-dependent and -independent cell growth and induced apoptosis [42]. Yang et al. found that TSA-induced uc002mbe.2 expression was positively correlated with the apoptotic effect of TSA in HCC cells and that knockdown of uc002mbe.2 expression significantly reduced TSA-induced apoptosis of Huh7 cells [50]. In addition, Xu et al. demonstrated that knockdown of URHC expression could promote apoptosis of HCC cells [51].
LncRNA-Protein Interaction
A large number of studies have revealed that many lncRNAs exert their function through interaction with proteins or protein complexes, especially epigenetic complexes such as polycomb repressive complex 1 (PRC1) and polycomb repressive complex 2 (PRC2) [59]. Some HCC-related lncRNAs have been demonstrated to play roles in tumorigenesis via the formation of ribonucleoprotein (RNP) complexes (Figure 1A). For example, it has been found that H19 can specifically associate with enhancer of zeste homolog 2 (EZH2), a key subunit of the PRC2 complex, and inhibit E-cad expression by directly suppressing E-cad transcription and by indirectly activating Wnt signaling [21]. Tsai et al. demonstrated that HOTAIR serves as a scaffold for two distinct histone modification complexes, PRC2 and the LSD1/CoREST/REST complex. The ability to tether two distinct complexes enables RNA-mediated assembly of PRC2 and LSD1 and coordinates the targeting of PRC2 and LSD1 to chromatin for coupled H3K27 methylation and H3K4 demethylation [53]. Wang et al. revealed that HOTTIP RNA can bind the adaptor protein WDR5 directly and target WDR5/MLL complexes across HOXA, driving H3K4 trimethylation and gene transcription [29]. Pandey et al. found that KCNQ1OT1 could interact with chromatin and with the H3K9- and H3K27-specific histone methyltransferases G9a and the PRC2 complex in a lineage-specific manner [54]. Kaneko et al. uncovered that MEG3 interacts with PRC2 mainly through the RNA-binding region (RBR) of JARID2 and that MEG3 acts in trans on PRC2 and JARID2 by facilitating their recruitment to a subset of target genes [57]. LncRNA-DREH was found to specifically associate with the protein vimentin, a type III intermediate filament (IF) and the major cytoskeletal component of mesenchymal cells [44]. A recent study found that lncRNA-HEIH can also associate with EZH2, and this association is required for the repression of EZH2 target genes in HCC, including p15, p16, p21, and p57 [45]. Yuan et al. demonstrated that lncRNA-MVIH could activate angiogenesis by interacting with PGK1, a protein secreted by tumor cells that inhibits angiogenesis, and inhibiting its secretion [46]. In addition, another study revealed that lncRNA-LET can bind to NF90, a double-stranded RNA-binding protein implicated in the stabilization, transport, and translational control of many target mRNAs, and decreases HIF1-α and CDC42 mRNA stability through its association with NF90 under hypoxic and normoxic conditions, respectively [47].
LncRNA-MicroRNA Interaction
Interestingly, several recent reports have provided a model in which lncRNAs may function as competing endogenous RNAs (ceRNAs) by modulating the concentration and biological functions of microRNAs. These lncRNAs act as miRNA "sponges": they generally share microRNA response elements (MREs) with the transcripts of several important genes and inhibit normal miRNA targeting activity on mRNAs. Several HCC-related lncRNAs have been identified as miRNA "sponges" (Figure 1B). For example, it has been found that vertebrate H19 harbors both canonical and non-canonical binding sites for the let-7 family of microRNAs, which plays key roles in development and cancer. Kallen et al. demonstrated that H19 modulates let-7 availability by acting as a molecular sponge, using H19 knockdown and overexpression as well as in vivo crosslinking and genome-wide transcriptome analysis [18]. Liu [48]. However, it is noteworthy that a recent study from the Bartel and Stoffel labs, which quantitated miRNA and target abundance, found that target derepression by ceRNAs occurs in a threshold-like manner at high target site abundance and that this threshold is insensitive to the effective levels of the miRNA. Strikingly, they concluded that modulation of miRNA target abundance is unlikely to cause significant effects on gene expression and metabolism through a ceRNA effect in vivo, suggesting that it is highly unlikely that endogenous lncRNAs actually function as ceRNAs [60].
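The threshold-like behavior reported by the Bartel and Stoffel labs can be conveyed with a deliberately crude stoichiometric toy model: a fixed miRNA pool is soaked up by the combined binding sites, and target repression scales with whatever miRNA remains free. All numbers below are illustrative, and the model ignores binding kinetics, turnover, and site affinity entirely.

```python
def free_mirna(total_mirna, sponge_sites, target_sites):
    """miRNA molecules left unbound after available sites soak up the pool."""
    return max(0.0, total_mirna - (sponge_sites + target_sites))

def target_output(total_mirna, sponge_sites, target_sites, k=0.01):
    """Relative target expression; repression scales with free miRNA."""
    return 1.0 / (1.0 + k * free_mirna(total_mirna, sponge_sites, target_sites))

# Derepression stays modest until sponge sites approach the miRNA pool size,
# then rises sharply: the "threshold-like" ceRNA effect.
for sponge_sites in (0, 500, 1000, 2000):
    print(sponge_sites, round(target_output(1500, sponge_sites, 400), 3))
```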
LncRNA-mRNA Interaction
Accumulating evidence indicates that lncRNAs can influence mRNA processing and post-transcriptional regulation by forming lncRNA-mRNA duplexes through complementary base pairing, involving the control of splicing, translation, and mRNA stability [61]. Some HCC-related lncRNAs have been shown to directly bind target mRNAs to exert post-transcriptional regulation (Figure 1C). For example, PCNA-AS1, antisense to PCNA, could increase PCNA mRNA stability via lncRNA-mRNA hybridization in HCC [43]. Additionally, Yuan et al. found that lncRNA-ATB specifically increased the stability of IL-11 mRNA in HCC, an effect that depends on direct binding to the IL-11 mRNA [48].
Conclusions and Perspectives
Hepatocellular carcinoma is a complex disease with multiple underlying pathogenic mechanisms caused by a variety of risk factors, and a better understanding of these molecular mechanisms will help to identify potential molecular targets for diagnosis and therapy. In recent years, long noncoding RNAs have been gaining the attention of researchers in many fields, particularly in cancer; a large number of lncRNAs have been identified, and there is exponential growth in studies on the biological functions of lncRNAs in human cancers, including HCC.
LncRNAs often exhibit spatially and temporally regulated expression patterns and are frequently restricted to specific tissue or cell types [62]. This specificity makes them potentially accurate biomarkers for cancer diagnostics. Furthermore, it has been demonstrated that cancer-specific lncRNAs can be detected in the plasma and urine of patients [63][64][65]. For example, HULC, an lncRNA highly up-regulated in liver cancer and positively related to Edmondson histological grade and hepatitis B virus (HBV)-positive status, could be detected in the plasma of HCC patients compared to healthy controls, and higher detection rates were observed in the plasma of patients with higher Edmondson grades or with HBV-positive status [63]. Novel potential biomarkers may thus be discovered among highly expressed cancer-associated lncRNAs.
Therapeutic benefit can be obtained through RNA-based therapeutic strategies, such as siRNAs and microRNAs, or by using small-molecule compounds designed specifically to interact with target lncRNAs or ribonucleoprotein complexes. Nucleotide drugs can be effectively delivered to the liver using viral and non-viral systems. For viral-mediated delivery, several types of viral vectors can be used, such as adenoviral and retroviral vectors. Viral vector-mediated RNA delivery to the liver can be achieved via the hepatic artery, portal vein, or bile duct, or by direct injection into the liver [66]. For non-viral approaches, a suite of synthetic delivery carriers for liver targeting has been developed, such as galactosylated liposomes [67], poly-L-glutamic acid-coated liposomes [68], octaarginine (R8)-modified lipid nanoparticles [69], and pH-triggered and PEGylated nanoparticles [70].
Although the roles played by lncRNAs in HCC have only just begun to be revealed, with the rapid development of high-throughput detection technologies, such as microarrays and RNA sequencing, together with the available bioinformatics tools for lncRNA functional analysis, an increasing number of HCC-related lncRNAs will be identified and characterized. This will provide new insights into the complicated lncRNA regulatory network and, ultimately, novel strategies for HCC clinical diagnosis and treatment.
"year": 2014,
"sha1": "b6f6f87b1af151f25edfa592eeee5b1f2fa4c85c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/15/11/20434/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b6f6f87b1af151f25edfa592eeee5b1f2fa4c85c",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Non-Coding Micro RNAs and Hypoxia-Inducible Factors Are Selenium Targets for Development of a Mechanism-Based Combination Strategy in Clear-Cell Renal Cell Carcinoma—Bench-to-Bedside Therapy
Durable response, inherent or acquired resistance, and dose-limiting toxicities continue to represent major barriers in the treatment of patients with advanced clear-cell renal cell carcinoma (ccRCC). The majority of ccRCC tumors are characterized by the loss of Von Hippel–Lindau tumor suppressor gene function, a stable expression of hypoxia-inducible factors 1α and 2α (HIFs), an altered expression of tumor-specific oncogenic microRNAs (miRNAs), a clear cytoplasm with dense lipid content, and overexpression of thymidine phosphorylase. The aim of this manuscript was to confirm that the downregulation of specific drug-resistance biomarkers deregulated in tumor cells, by a defined dose and schedule of methylselenocysteine (MSC) or seleno-L-methionine (SLM), sensitizes tumor cells to mechanism-based drug combinations. The inhibition of HIFs by selenium was necessary for optimal therapeutic benefit. Durable responses were achieved only when MSC was combined with sunitinib (a vascular endothelial growth factor receptor (VEGFR)-targeted biologic), topotecan (a topoisomerase 1 poison and HIF synthesis inhibitor), and S-1 (a 5-fluorouracil prodrug). The documented synergy was selenium dose- and schedule-dependent and associated with enhanced prolyl hydroxylase-dependent HIF degradation, stabilization of tumor vasculature, downregulation of 28 oncogenic miRNAs, as well as the upregulation of 12 tumor-suppressor miRNAs. The preclinical results generated provided the rationale for the development of phase 1/2 clinical trials of SLM in sequential combination with axitinib in ccRCC patients refractory to standard therapies.
Introduction
Despite advances in the treatment of patients with advanced clear-cell renal cell carcinoma (ccRCC) with anti-angiogenic agents, checkpoint inhibitors, and mammalian target of rapamycin (mTOR) inhibitors alone and in combination, durable responses are seen in only about 30% of treated ccRCC patients [1][2][3][4][5][6][7][8][9][10][11][12][13]. A systematic review of first-line therapy for metastatic renal carcinoma reported an average progression-free survival of 8.4 months with a range of 6.5 to 12.3 months, and an average overall survival of 24.4 months with a range of 18.5 to 32.9 months [14]. Based on the clinical data generated in patients with advanced cancer, resistance and the associated dose-limiting toxicities remain major barriers.
Hypoxia-Inducible Factors 1α and 2α (HIFs) and VHL Tumor Suppressor Gene
The molecular profiles of ccRCC tumors are summarized in Figure 1 and Table 1. HIFs are transcriptional factors that regulate the expression of over 200 genes involved in angiogenesis, tumor metastasis, and drug resistance. Unlike colorectal and head-and-neck tumors, ccRCC tumors feature a high incidence and intensity of constitutively expressed HIFs, as well as lower levels of VEGF and prolyl hydroxylase 2 (PHD2), with no detectable prolyl hydroxylase 3 (PHD3), as assessed by immunohistochemistry (Table 1).
Table 1. Molecular profile of tumor biopsies: incidence of HIF-α and PHD protein expression in primary human ccRCC, head-and-neck (H/N), and colorectal cancer (CRC) tumors. The data for HIFs and VEGF were generated by our laboratory [20,22], while others are from published reports [13,63].

Our laboratory was the first to report that constitutively expressed HIF1α and HIF2α (Table 1, Figure 2) are selenium targets (adapted from References [20,32]). The data in Figure 2 show that the inhibition of constitutively expressed HIF1α and HIF2α in RC2 and 786.0 clear-cell RCC cells, and of HIF1α in FaDu head-and-neck [32], A549 lung carcinoma, and HT29 colorectal tumor cells, is selenium dose-dependent and independent of the disease site/cell type. Unlike other HIF-targeting agents, selenium inhibits HIF expression via PHD-dependent degradation [20,32].
Tumor Vasculature

To accommodate survival, growth, and metastasis, tumor cells promote the formation and development of new blood vessels [36,39]. Tumor-associated blood vessels within the tumor microenvironment are unstable and leaky, and they can represent a barrier to the delivery of effective therapies to tumor cells [67,68]. Thus, for the development of efficacious therapy, treatment should include drugs targeting biomarkers that induce the normalization of tumor-associated vasculature. Our laboratory was the first to report that the stabilization of tumor vasculature by MSC is dose- and schedule-dependent. We previously reported that the therapeutic dose and schedule of MSC/SLM exert dual effects. Firstly, anti-angiogenic effects were achieved via the inhibition of new vessel formation and a reduction in microvessel density. Secondly, tumor vascular maturation was achieved through an increase in pericyte recruitment. Collectively, these effects were associated with an increase in drug delivery and distribution to tumor cells. As shown in Figure 3, in vivo treatment with therapeutic doses of MSC resulted in a selective increase in the vascular maturation index in tumors, but not in normal mouse liver tissue. The data generated demonstrate that tumor cells and their associated vasculature can be successfully and selectively modulated in vivo by a therapeutic, non-toxic dose and schedule of MSC. These results are consistent with the data generated by Jain et al., demonstrating normalization of the tumor microenvironment by Avastin, an anti-angiogenic agent [69].
To identify a possible link between HIF-α protein expression levels and tumor-associated miRNAs, three primary ccRCC biopsies and two ccRCC cell lines expressing a similar incidence and distribution of HIF-α were analyzed using a microarray for miRNA expression. Microarray analysis using an Exiqon microarray chip of RC2 cells treated with methylselenic acid (MSA), an inhibitor of HIF1α, revealed that 28 miRNAs were downregulated and 12 miRNAs were upregulated (Figure 4A). Although several miRNAs were altered, selected miRNAs that were upregulated and downregulated by MSA treatment are shown in Figure 4B. These results suggest that these miRNAs are likely regulated by HIF1α and can be effectively modulated by therapeutic doses of selenium. The data in Figure 5 indicate that the miRNAs that were significantly altered by MSA treatment of RC2 cells expressing HIF1α and of 786.0 cells expressing HIF2α were also altered in primary ccRCC biopsies. Two upregulated miRNAs, Let-7b and -328, and three downregulated miRNAs, miRNA-106b, -155, and -210, altered by MSA treatment of RC2 and 786.0 cells, were randomly selected for qRT-PCR analysis along with four primary ccRCC tumor biopsies and their paired normal kidney cells.
The results presented in Figure 5 confirmed the microarray data: the selected miRNAs that were altered in RC2 and 786.0 cells were similarly altered in the patient biopsies, and their expression could be modulated in vitro and in vivo by selenium. Collectively, the data generated demonstrate that a defined dose and schedule of selenium can effectively modulate the expression levels of specific oncogenic and tumor-suppressor miRNAs altered in ccRCC tumor cells.
Nude Mice Bearing HIF1α
The data in Figure 6A demonstrate the antitumor activity of MSC in sequential combination with two representative cytotoxic drugs, irinotecan (an approved drug for the treatment of colorectal cancer) and docetaxel (used in head-and-neck cancers, among others), and with radiation therapy. Oral daily administration of 10 mg/kg/day MSC for seven days prior to and concurrent with the administration of cytotoxic or radiation therapies beginning on day seven was associated with enhanced therapeutic efficacy.

Figure 6. Antitumor activity of MSC in combination with irinotecan and docetaxel in nude mice bearing human head-and-neck cancer cells, FaDu and A253 (A), and radiation-treated A549 lung carcinoma (B). MSC was administered orally daily for seven days and concurrently with anticancer therapies administered on day seven [82].
The data in Figure 6B demonstrate the antitumor activity of MSC in sequential combination with radiation therapy in mice bearing A549 lung carcinoma tumors expressing HIF. Collectively, MSC was found to significantly enhance the therapeutic efficacy of chemotherapy and radiation in human cancer xenografts from different disease sites. The results generated suggest that the action of selenium in tumor cells expressing HIFs is a universal phenomenon, irrespective of the cancer type or disease site. Figure 7A,B depict tumor growth inhibition by MSC, SLM, axitinib, sunitinib, and topotecan. The dose and schedule of MSC and SLM that inhibited HIF exhibited limited but similar tumor growth inhibition. Sunitinib exerted greater antitumor activity than Avastin, axitinib, and topotecan [83]. The order of antitumor activity is sunitinib > Avastin ≥ axitinib > topotecan > MSC or SLM. The data in Figure 7C depict the antitumor activity of tyrosine kinase inhibitors (TKIs) that target VEGF/VEGFR, and of topotecan, alone and in combination with either MSC or SLM. The combination of topotecan and sunitinib in sequential combination with MSC or SLM had the greatest therapeutic efficacy and achieved long-term, durable responses not observed with these drugs administered individually. The data in Figure 7D indicate that MSC and SLM similarly potentiate the antitumor activity of axitinib, a Food and Drug Administration (FDA)-approved VEGFR-targeting agent for the treatment of relapsed ccRCC patients. The data in Figure 7E confirm that HIFs are a critical therapeutic target of MSC. MSC potentiates the antitumor activity of topotecan, a topoisomerase 1 poison that targets HIF synthesis, as well as that of Avastin, axitinib, and sunitinib, which target VEGF/VEGFR. In comparison, the antitumor activity of irinotecan, a topoisomerase 1 poison with no demonstrable effects on HIF protein expression, was not potentiated by MSC. In this model, S-1 exhibited significant antitumor activity, perhaps due to overexpression of TP. Collectively, the data in Figure 7E indicate that optimal therapeutic benefit was obtained with MSC in sequential combination with topotecan and sunitinib.
Discussion
Clear-cell RCCs and their associated microenvironment express a unique molecular and morphological profile, including a variety of tumor-suppressor and oncogenic miRNAs. Among these, miRNA-155 and miRNA-210 are extensively characterized and overexpressed in multiple tumor types [75][76][77][78]. Although VHL may be regulated by multiple biomarkers expressed in tumor cells and their adjacent microenvironment, miRNA-155 and -210 have emerged as key modulators of VHL function and may offer an alternative mechanism for the stable expression of HIFs in ccRCC tumors [17,77]. Loss of VHL in ccRCC tumors may mimic the upregulation of HIFs by hypoxia. In recognition of the critical role of VHL in the pathogenesis of ccRCC tumors, efforts are underway to develop anti-VHL chemical agents [84,85]. Similarly, recognizing that HIFs are upregulated by hypoxia-dependent and -independent pathways and that they are critical therapeutic targets, a number of HIF inhibitors are presently under preclinical and clinical development. A recent phase 1 clinical trial of PT2385, a synthetic small-molecule HIF2α antagonist, demonstrated clinical activity in previously treated ccRCC patients [86].
Tumor microarray analysis demonstrated that HIF1α and HIF2α are individually and jointly co-expressed in a majority of primary and metastatic ccRCC biopsies [20]. In addition, it was reported that, although HIF1α and HIF2α are structurally similar, they functionally regulate different target genes in different cell types [25]. Furthermore, under hypoxia, the expression of VEGF is regulated by HIF1α, but not by HIF2α [33]. It is possible that the inhibition of one HIF isoform may induce the activation of the other in support of tumor growth. The data to date suggest that optimal therapeutic benefit may require targeting both HIF1α and HIF2α.
HIFs and PD-L1 are co-expressed in cancer cells. Under hypoxic conditions, HIFs regulate the expression of PD-L1 by binding to the hypoxia response element in the PD-L1 proximal promoter to activate its transcription [42,47]. PD-L1 expression in cancer cells may, therefore, be regulated transcriptionally by HIF and post-transcriptionally by miRNAs. It is likely that effective downregulation of HIFs would lead to the downregulation of PD-L1, resulting in an increased tumor response to subsequent treatment with anti-PD-1/PD-L1 therapies.
MicroRNA-155 and miRNA-210, amongst others, have been reported to modulate the tumor microenvironment [74,75], regulate glucose metabolism [87], and target the transcription factor E2F2 in ccRCC tumor cells [88]. Neal et al. reported that the VHL/HIF axis regulates the expression of several types of miRNAs in ccRCC tumors, including miRNA-155 and miRNA-210 [53]. Increasing evidence suggests that oncogenic miRNA-155 and miRNA-210 are regulators of immune-response biomarkers, including forkhead box P3 (FoxP3) regulatory T cells, myeloid-derived suppressor cells (MDSCs), and the immune checkpoint PD-1/PD-L1 [56,80,81,89,90]. Despite the progress made in our understanding of the biology and therapeutic potential of miRNAs, their clinical use as prognostic markers and predictors of therapeutic outcome is yet to be determined. Efforts to develop miRNA inhibitors have fallen short of clinical expectations [91][92][93]. The limited clinical benefits were attributed, in part, to their limited bioavailability, instability, and dose-limiting toxicities, in addition to an inability to demonstrate in vivo modulation of the expression of intended targets. Our laboratory was the first to demonstrate that specific types, doses, and schedules of MSC in ccRCC xenograft models can selectively modulate specific types of miRNAs.
Clear-cell RCC tumors are highly vascular, with clear, large cytoplasms expressing perilipin 2, a hypoxia-inducible lipid-droplet protein, which represses fatty-acid metabolism and is a target gene of HIF1α [22,64,65]. Molecularly, the majority of ccRCC tumors express a high incidence and intensity of HIF1α, HIF2α, and the oncogenic miRNA-155 and -210, which target genes involved in ccRCC tumorigenesis, including VEGF and PD-L1. The tumor microenvironment associated with ccRCC is leaky and unstable, expressing the common biomarkers that regulate tumor cell growth and metastasis seen in many cancers. Thus, ccRCC tumors provide the opportunity to test the hypothesis and rationale for a mechanism-based treatment combination with selenium, which may offer the potential for the development of novel treatments in patients with ccRCC and other cancers with similar expression of Se targets.
Resistance and dose-limiting toxicities continue to represent major clinical challenges for both cytotoxic chemotherapy and biological targeted therapies. In general, in vivo resistance is regulated by multiple molecular and immunological biomarkers expressed in tumor cells and their surrounding microenvironment. These two tumor compartments are functionally interactive. The tumor microenvironment could promote tumor growth while impeding optimal drug delivery and the distribution of effective tumor drug concentrations. Thus, the tumor microenvironment may be considered as the gatekeeper, while tumor cells are the ultimate targets. In order to achieve durable antitumor activity, treatment should include a combination of drugs that enable targeting both the tumor microenvironment and the tumor cells.
In ccRCC, HIFs, miRNA-155, and miRNA-210 are commonly co-expressed and were reported earlier to regulate the expression of gene targets implicated in enhanced angiogenesis, tumor metastasis, and resistance. While considerable efforts are underway to develop miRNA- and HIF-based strategies, in vivo toxicity, tumor instability, and limited drug delivery in effective concentrations continue to plague efforts to achieve a more clinically effective outcome [93]. In addition, an increased activation of 5-FU prodrugs by TP should result in increased antitumor activity [94][95][96].
During the last several years, our laboratory determined that SLM, an FDA-approved drug for clinical trials, and MSC (under development) exert several effects that are not shared by other selenium compounds and HIF-targeting compounds that are currently under preclinical and clinical evaluation [20,23,70,[83][84][85][86][87][88][89][90]. We were the first to demonstrate [97,98], in several tumor xenograft models, that (1) therapeutic and nontoxic doses and schedules of the organic selenium compounds SLM and MSC potently enhance the degradation of constitutively expressed HIF1α and HIF2α; (2) SLM and MSC downregulate VEGF, which is regulated by HIF1α, but not by HIF2α; (3) SLM and MSC stabilize tumor vasculature, resulting in the selective enhancement of drug delivery to tumor cells, consistent with results reported by Jain [69]; (4) SLM and MSC modulate the expression of a number of tumor-suppressor and oncogenic miRNAs altered in ccRCC tumors; (5) SLM and MSC offer selective protection against toxicity induced by toxic and often lethal doses of cytotoxic drugs in preclinical models [83]; and (6) treatment with MSC and SLM was associated with significant enhancement of the efficacy and selectivity of anticancer therapies in head-and-neck, colorectal, and renal cancer xenografts. The antitumor activity of VEGF/VEGFR-targeted therapies alone and in combination with topotecan and S-1 can be further enhanced by MSC in mice bearing VHL-deficient 786.0 ccRCC tumors expressing HIF2α, VEGF, miRNA-155, and miRNA-210. Taken together, non-toxic doses of selenium may offer the potential for the development of a novel therapeutic modality. Chart 1 outlines the approach used in the translational development of selenium in combination with anticancer drugs, from preclinical models to phase 1 and 2 clinical trials. The data generated in several xenograft models provided the rationale for the development of a phase 1 clinical trial in ccRCC patients. The aim was to confirm that the SLM dose used to yield blood selenium concentrations similar to those determined to be therapeutically synergistic with anticancer drugs in the preclinical models could be achieved clinically without toxicity. The optimal SLM dose defined in the phase 1 trial [99] was used to design a phase 2 trial of SLM in sequential combination with axitinib, aimed at assessing efficacy and the modulation of relevant molecular correlates.
Based on the preclinical results generated, a mechanism-based combination therapy is proposed, as outlined in Chart 2. In order to achieve optimal therapeutic benefit with the proposed mechanism-based drug combination, the dose, schedule, and sequence of MSC and SLM are critical parameters. Pretreatment with selenium prior to and concurrent with the administration of anticancer therapy is necessary for the optimal modulation of relevant selenium biomarkers in tumor cells and for the optimal stabilization of tumor vasculature. To maintain optimal and sustained inhibition of HIFs and associated gene targets, it is recommended that topotecan be administered in combination with MSC or SLM. Since therapeutic doses and schedules of selenium partially downregulate the expression levels of VEGF in tumor cells expressing HIF1α but not HIF2α [20,23], we propose adding TKIs to the combination regimen for maximum downregulation of VEGF/VEGFR. This proposed mechanism-based combination was evaluated in 786.0 xenografts and was determined to be highly selective and therapeutically effective.

Chart 2. Schematic representation of targetable markers expressed in ccRCC. Methylselenocysteine (MSC) targets hypoxia-inducible factors (HIFs) and micro RNAs (miRNAs). Topotecan targets HIF synthesis, while tyrosine kinase inhibitors (TKIs) target vascular endothelial growth factor (VEGF)/VEGF receptor (VEGFR), and 5-fluorouracil (5-FU) prodrugs are the substrate for activation by thymidine phosphorylase.

The dose and schedule of SLM/MSC used were selected based on their molecularly effective dose rather than the maximum tolerated dose. Furthermore, since the expression level of PD-L1 is regulated by HIFs and miRNAs, it is reasonable to expect that SLM/MSC will also modulate the therapeutic efficacy of checkpoint inhibitors. Proof of principle in ccRCC could provide the basis for the verification of this mechanism-based treatment combination in other tumors expressing these molecular targets similarly affected by SLM/MSC.
Conclusions and Future Perspectives
The aim of this paper was to determine whether the levels of specific biomarkers altered in the majority of ccRCC tumors, such as HIFs, oncogenic miRNA-155 and miRNA-210, and VEGF, can be selectively downregulated by therapeutic, nontoxic doses and schedules of MSC and SLM. In addition, the aim was also to confirm that the downregulation of these biomarkers would translate into therapeutic synergy with anticancer therapies. The results in several xenograft models and with multiple cytotoxic and biologic agents demonstrated that the dose- and time-dependent downregulation of constitutively expressed HIFs, miRNA-155 and -210, and VEGF-A by selenium was associated with enhanced therapeutic efficacy and selectivity of anticancer therapies. The preclinical data generated provided the rationale for the development of a phase 1 clinical trial in ccRCC patients treated with escalating doses of SLM in sequential combination with a fixed dose of axitinib [99,100]. Unlike the 200 µg/day SLM dose used in prevention clinical trials, the SLM doses used in combination therapy were 10 mg/kg in nude mice and 8000 µg/day in ccRCC patients, the latter being the dose recommended for the ongoing phase 2 clinical trial for efficacy assessment and for monitoring the effects of SLM on relevant biomarkers. The plasma selenium concentrations achieved clinically with the recommended SLM dose were comparable to those achieved with SLM doses determined to be therapeutically synergistic with anticancer drugs in preclinical models. The mechanism-based drug combination proposed in Chart 2 warrants expanded preclinical investigation and clinical verification. Proof of concept that the enhanced therapeutic efficacy and selectivity of axitinib in refractory ccRCC patients are SLM dose- and schedule-dependent would be highly innovative and significant. Furthermore, the ability of selenium to downregulate specific biomarkers associated with drug resistance may provide the opportunity for the clinical development of SLM in sequential combination with other clinically available targeted therapies.
Cell Culture and Drug Treatments
Clear-cell RCC cell lines 786.0 and RC2 were cultured in Roswell Park Memorial Institute (RPMI-1640) medium with 10% fetal bovine serum (FBS) and 1% penicillin/streptomycin (PenStrep, Sigma-Aldrich, St. Louis, MO, USA) at 37 °C in an incubator with 5% CO2. Cells were routinely tested for mycoplasma contamination. Cells were seeded in T75 and/or T150 flasks and allowed to grow overnight. Cells were treated with MSA for 24 to 48 h and then processed to isolate total RNA. Untreated control cells were maintained without treatment.
Animals
Female athymic nude mice (Envigo, nu/nu, 20-25 g body weight), 8-12 weeks of age, were used for the tumor xenograft experiments as previously described [97]. All studies were approved by the Roswell Park Comprehensive Cancer Center Institutional Animal Care and Use Committee (207M, 2009).
Tumor Xenografts
Clear-cell RCC 786.0 cells were cultured in RPMI-1640 and transplanted into nude mice to establish xenografts. Tumors were harvested, and ~50 mg of non-necrotic tumor tissue was transplanted into nude mice randomized to groups of 5-10 mice each. Treatment with drugs alone or in combination was started when tumors reached ~200 mg, and tumor volume and response were measured as described previously [97]. Drug toxicity was evaluated by measuring the weight loss of the mice biweekly.
Drugs
MSC and SLM (Sigma-Aldrich, St. Louis, MO, USA) were given at 0.2 mg/kg for 35 days, starting seven days prior to the start of drug treatment. Axitinib (AdooQ Bioscience, Irvine, CA, USA), sunitinib (LC Laboratories, Woburn, MA, USA), and topotecan (Selleckchem, Houston, TX, USA) were administered orally at 25 mg/kg, 80 mg/kg, and 2 mg/kg, respectively, five days per week for four weeks, either as single drugs or in combination. Avastin (Genentech, South San Francisco, CA, USA) was given at 5 mg/kg via intraperitoneal injection five days/week for four weeks, either by itself or in combination with selenium.
Total RNA Isolation from ccRCC Cells Treated with and without MSA
Cells were treated with MSA for 24-48 h and processed for the isolation of total RNA using Trizol reagent as per the manufacturer's instructions (Invitrogen, Liverpool, NY, USA). RNA quantity and quality were measured using a Nanodrop (Thermo-Fisher Scientific, Liverpool, NY, USA), and the RNA was then used for miRNA microarray analysis and quantitative PCR analysis of miRNA.
Total RNA from ccRCC Patient Tumors and Their Matched Normal Tissues
Total RNA from de-identified ccRCC patient tumors and their matched normal kidneys was obtained from the RPCI Pathology core facility. RNA samples were isolated using Trizol reagent (Thermo-Fisher Scientific, Liverpool, NY, USA) from non-necrotic tissues selected by a pathologist, and purity was determined before use for detecting miRNA expression by qRT-PCR.
Reverse Transcription (RT) and miRNA qPCR
Complementary DNA (cDNA) was prepared using the following quantities of each reagent: 4 µL (20 ng) of RNA, 9 µL of H2O, 1 µL of Spike-In, 4 µL of reverse transcription (RT) buffer, and 2 µL of enzyme, in a total volume of 20 µL. Immediately after the RT reaction was finished, a 1:80 dilution of the cDNA was made, and ROX was added. The reaction mix for qRT-PCR was prepared using 400 µL of SYBR® Green Master Mix (Thermo-Fisher Scientific, Liverpool, NY, USA) and 320 µL of cDNA (from the above diluted RT reaction). Then, 9 µL of this mix was added in triplicate to a 384-well plate pre-loaded with specific miR primers using an electronic multichannel pipette. Plates were sealed with optical tape, shaken on a plate shaker for 30 s, centrifuged for one minute, and run on an ABI7900 qPCR machine (Applied Biosystems, Foster City, CA, USA). qPCR cycling conditions and parameters were set exactly the same for every plate.
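For bench planning, the per-reaction volumes above scale linearly with sample count. The following minimal Python sketch is a hypothetical helper, not part of the cited protocol: the 10% pipetting overage and the exclusion of RNA from the master mix are assumptions, but the per-reaction volumes and totals are checked against the numbers stated above.

```python
# Per-reaction RT volumes (uL) from the protocol above; RNA (4 uL, 20 ng)
# is added individually per sample and is therefore kept out of the mix.
MASTER_MIX = {"H2O": 9.0, "Spike-In": 1.0, "RT buffer": 4.0, "enzyme": 2.0}
RNA_UL = 4.0

def scale_mix(per_rxn, n, overage=0.10):
    """Scale per-reaction volumes to n reactions plus a pipetting overage
    (the 10% overage is a bench convention assumed here)."""
    return {reagent: vol * n * (1.0 + overage) for reagent, vol in per_rxn.items()}

for reagent, ul in scale_mix(MASTER_MIX, n=24).items():
    print(f"{reagent:>10}: {ul:6.1f} uL")

# Sanity checks against the stated protocol numbers.
assert RNA_UL + sum(MASTER_MIX.values()) == 20.0   # 20 uL total RT reaction
assert (400.0 + 320.0) / 9.0 >= 80.0               # SYBR+cDNA mix covers ~80 wells
```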
Normalization of Exiqon miRNA Panels (http://www.exiqon.com/mirna-pcr-panels): excerpt from the Exiqon manual on the inter-plate calibrator (IPC). Since each assay was present only once on each plate, replicates were performed using separate plates. This raises the issue of run-to-run differences. To allow for simple inter-plate calibration, we designed a calibration assay with an accompanying template (annotated as UniSp3 or IPC in the plate layout files). Three wells were assigned to inter-plate calibration to provide triplicate values with the possibility of outlier removal. In each of these wells, both the primers and the DNA template were present, giving high reproducibility. The inter-plate calibrator requires only the addition of the SYBR® Green master mix in order to give a signal and can, therefore, be used for quality control of each plate run.
Plates were imported into the GenEx software (ver. 6.1, Thermo-Fisher Scientific, Liverpool, NY, USA; http://www.exiqon.com/qpcr-software), and the IPCs (in triplicate on each plate) were used to normalize the plates, helping to eliminate run-to-run variation when comparing multiple plates. All Ct values above 38 were set to 38 as the maximum value (this cutoff is arbitrary, and such wells may even be left blank to denote non-amplification). All miRNAs were listed in an Excel file regardless of whether or not they were expressed in the samples, with normalized Ct values for each sample. Data were represented as individual triplicate runs and as averages of triplicates (with outliers excluded). miRNA expression was normalized to untreated controls, and fold changes with the selenium treatment were determined. In ccRCC patient tumors, miRNA expression was normalized to normal tissue, and fold changes were determined.
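The calibration and fold-change logic described above can be made concrete with a short sketch. This is a simplified stand-in for the GenEx workflow, not a reimplementation: the additive-Ct-shift calibration, the reference IPC value, and all Ct numbers are assumptions for illustration. It applies the IPC correction, the Ct ceiling of 38, and a 2^-ΔCt fold change against the untreated control.

```python
import numpy as np

CT_MAX = 38.0  # Ct values above 38 are truncated, as described above

def calibrate(plate_ct, ipc_ct, ref_ipc):
    """Shift a plate's Ct values so its IPC mean matches a common reference.

    Simplified stand-in for GenEx inter-plate calibration; the additive
    Ct shift is an assumption for illustration."""
    shift = np.mean(ipc_ct) - ref_ipc
    return np.minimum(np.asarray(plate_ct, dtype=float) - shift, CT_MAX)

def fold_change(ct_treated, ct_control):
    """Fold change (treated vs. control) from mean calibrated Ct: 2**(-dCt)."""
    return 2.0 ** -(np.mean(ct_treated) - np.mean(ct_control))

# Toy triplicates: control and MSA-treated samples run on different plates.
REF_IPC = 20.0
control = calibrate([28.1, 28.3, 28.0], ipc_ct=[20.4, 20.5, 20.3], ref_ipc=REF_IPC)
treated = calibrate([26.7, 26.9, 26.8], ipc_ct=[19.6, 19.7, 19.5], ref_ipc=REF_IPC)
print(f"fold change vs. untreated control: {fold_change(treated, control):.2f}")
```

Here the two plates have IPC baselines shifted in opposite directions; after calibration, the apparent expression difference reflects the samples rather than the plate runs.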
Conflicts of Interest:
The authors declare no conflicts of interest.
"year": 2018,
"sha1": "1807874a13d9d8f70a4730f86bd7e6e16651c8fb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/19/11/3378/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1807874a13d9d8f70a4730f86bd7e6e16651c8fb",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
How to Reach the New Green Deal Targets: Analysing the Necessary Burden Sharing within the EU Using a Multi-Model Approach
The Green Deal of the European Union defines extremely ambitious climate targets for 2030 (−55% emissions compared to 1990) and 2050 (−100%), which go far beyond the goals that the EU member states have agreed on thus far. The question of which sectors contribute how much has already been discussed, but is far from decided, while the question of which countries shoulder how much of the tightened reduction targets has hardly been discussed. We want to contribute significantly to answering these policy questions by analysing the necessary burden sharing within the EU from both an energy system and an overall macroeconomic perspective. For this purpose, we use the energy system model TIMES PanEU and the computable general equilibrium model NEWAGE. Our results show that excessively strong targets for the Emission Trading System (ETS) in 2030 are not system-optimal for achieving the 55% overall target; reductions should be made in such a way that an emissions budget ratio of 39 (ETS sector) to 61 (Non-ETS sector) results. Economically weaker regions would have to reduce their CO2 emissions until 2030 by up to 33% on top of the currently decided targets in the Effort Sharing Regulation, which leads to higher energy system costs as well as losses in gross domestic product (GDP). Depending on the policy scenario applied, GDP losses in the range of −0.79% to −1.95% relative to baseline can be found for single EU regions. In the long term, an equally strict mitigation regime for all countries in 2050 is not optimal from a system perspective; total system costs would be higher by 1.5%. Instead, some countries should generate negative net emissions to compensate for non-mitigable residual emissions from other countries.
Introduction
The Green Deal [1] of the European Union (EU) specifies a very ambitious reduction in greenhouse gas (GHG) emissions by 2030 (−55% compared to 1990), while complete climate neutrality in the EU is to be achieved by 2050.
The discrepancy between current targets for 2030 established in the European Union's Effort Sharing Regulation (ESR) and the new 2030 target imposed by the Green Deal raises the question of how and, more importantly, by whom this gap is to be closed.
On one hand, it must be discussed how the additional reductions are to be distributed between the European Union Emissions Trading System (ETS) and the Non-ETS sector (also referred to as the ESR sector in the following). While the EU Commission's proposals envisage a relatively balanced ratio of the burdens between the two systems, other studies see the necessity of significantly stronger reduction contributions in the ETS sector.
On the other hand, the question emerges as to which countries will contribute these additional reductions. According to the effort-sharing that has been enacted thus far, the reductions in the Non-ETS sector are not distributed equally among the countries, but take into account "the different capacities of Member States to take action by differentiating targets according to gross domestic product (GDP) per capita across Member States" as well as the "cost-effectiveness for those Member States with an above average GDP per capita" [2].
This leads to the issue of how far this distribution between countries should be adapted, taking into account the new, significantly more ambitious targets under the Green Deal. The focus of this paper is on the tightening of targets for 2030, but the long-term mitigation burdens that will result from achieving climate neutrality in 2050 are also addressed.
As shown in Section 2, we could not find any modelling study concerning the Green Deal that addresses burden-sharing between EU regions, neither with energy system models, nor with general equilibrium models. For this reason, we want to obtain new scientific insights with our paper by quantifying the required burden sharing within the EU because of the Green Deal and analysing the resulting economic implications in the European regions with regard to energy system costs and GDP.
Many studies have considered stronger mitigation in the ETS sector to be favourable to achieving the 2030 targets. We aim to contribute to this with TIMES PanEU by shedding light on the optimal distribution of mitigation burden between ETS and Non-ETS sectors from a system perspective.
Furthermore, we use TIMES PanEU to examine the techno-economic requirements for the energy system emerging from the general, EU-wide climate target for 2030. The computable general equilibrium model NEWAGE provides an independent macroeconomic perspective on this target. In this way, we can reach a comprehensive overall view of the Green Deal goal.
With TIMES PanEU, the Green Deal can be studied in terms of its impact on the energy system with a high level of technological detail. Since all sectors are mapped, it is suitable for the investigation of a far-reaching goal such as climate neutrality for the EU. We distinguish ourselves from the existing literature by not only considering the EU as a single entity, but by also mapping and analysing the individual countries of the EU in detail. With this approach, we are able to identify opposing effects of the Green Deal on individual countries or regions in Europe. We want to go into regional depth from an energy system perspective in order to explore the reduction burden of the individual countries and regions more precisely within the EU.
With NEWAGE, we look at macroeconomic developments, which particularly concern burden sharing between countries. NEWAGE does not have the same level of detail and precision in its representation of the energy system, but it takes repercussions between developments in different economic sectors into account, and thereby provides an independent view on the overall economic effects on EU regions.
The three key findings of the paper can be summarized as follows:
• An excessively strong focus on mitigation in the ETS sector in 2030 is not cost-optimal. Reductions should be divided between the ESR and ETS sectors in such a way that an emissions budget ratio of 61 to 39 results, similar to what the EU Commission also proposes (a stylized sketch of this cost-optimality logic follows the list). Nevertheless, the ETS sector always provides the major contribution to emissions reductions in all scenarios until 2030, with −60% emissions in the optimal scenario compared to 2005.
• From an energy system perspective, economically weaker countries should reduce their emissions significantly more by 2030 than previously envisaged in the ESR targets to achieve the EU-wide targets at optimal cost. Their respective additional reductions range from 27% to 33%, depending on the region. However, the macroeconomic studies show the high economic burdens that result from distributing emission budgets according to a gross EU27 + UK optimum, which makes support via compensation measures absolutely necessary. Depending on the policy scenario applied, GDP losses in the range of −0.79% to −1.95% relative to baseline can be found for single EU regions.
• To achieve climate neutrality in 2050 for the entire EU, an equally strict mitigation regime for all countries is not optimal from a system perspective, as total annualized system costs would be higher by 1.5% in this case. In particular, countries with large shares of agricultural emissions in their total emissions should not be given excessively strong targets. In contrast, countries with high biomass potentials should generate negative emissions to compensate for these residual emissions.
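The cost-optimality argument behind the first finding can be illustrated with a stylized equimarginal calculation. The sketch below is emphatically not TIMES PanEU: it reduces the problem to two aggregate sectors with quadratic abatement cost functions, and the cost coefficients and the 100-unit target are purely illustrative assumptions. It allocates the extra 2030 effort so that marginal abatement costs are equalized across sectors.

```python
def optimal_abatement(cost_coeffs, total_target):
    """Equimarginal allocation for quadratic abatement costs c_i * a_i**2.

    Minimising sum(c_i * a_i**2) subject to sum(a_i) = total_target yields
    equal marginal costs 2 * c_i * a_i = lam, hence a_i = lam / (2 * c_i).
    """
    lam = total_target / sum(1.0 / (2.0 * c) for c in cost_coeffs.values())
    return lam, {name: lam / (2.0 * c) for name, c in cost_coeffs.items()}

# Illustrative steepness of marginal abatement costs (assumed, not model data):
# the Non-ETS (ESR) sectors are taken to be harder to decarbonise.
costs = {"ETS": 1.0, "Non-ETS": 1.8}
lam, alloc = optimal_abatement(costs, total_target=100.0)  # 100 = index of extra effort

print(f"uniform marginal cost (shadow carbon price, index): {lam:.1f}")
for sector, effort in alloc.items():
    print(f"{sector:>8}: {effort:5.1f} units ({effort / 100.0:.0%} of the extra effort)")
```

With these assumed coefficients the ETS shoulders roughly two thirds of the additional effort, qualitatively echoing the finding that the ETS delivers the major share of reductions; note that the 61:39 ratio above refers to remaining emission budgets rather than abatement shares and comes from the full model.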
The rest of this work is structured as follows. In Section 2, we conduct an extensive literature review. Section 3 contains a brief description about the two models used and the general scenario framework. We then conduct our analyses of burden sharing in 2030 in Section 4, while we turn to the long-term analyses for the year 2050 in Section 5. Finally, we provide a short discussion of our results including an outlook for further research in Section 6 and the conclusions in Section 7.
Literature Review
In the literature before 2019, energy scenario definitions mainly focussed on greenhouse gas reduction goals for Europe between 80 and 85% [3,4] or on analysing the integration of renewable energies in the energy system [5][6][7]. There is a wide variety of modelling tools for energy scenario assessments [8], but we focussed on those calculating deep decarbonisation scenarios. The existing literature can be further divided into studies analysing the near future [9,10] or the long-term effects. Energy system research that considers the actual European targets for 2050 or the global target to limit climate change to 1.5 °C shows a widely varying demand for bioenergy plus carbon capture and storage (BECCS) to reach deep decarbonisation, reflecting the uncertainty of the remaining carbon dioxide (CO2) budgets.
Pietzcker et al. [11] analysed the tightening of the European targets for 2030 and 2050 using the electricity market model LIMES EU. Their analysis showed a faster transformation of the ETS sector, with an earlier phase-out of coal and an accelerated expansion of renewable energy sources. The more ambitious targets led to the deployment of BECCS above a carbon price level of 100 €/t CO2. Furthermore, they found a rather limited effect of BECCS availability on carbon emissions as well as on carbon and electricity prices.
Luderer et al. [12] used seven integrated assessment models (IAMs) to analyse residual CO2 emissions from fossil fuels under the 1.5 °C goal. They assumed a global budget of 200 Gt CO2 between 2016 and 2100 to keep global warming below 1.5 °C. According to their research, a significant amount of ca. 800 Gt of residual CO2 emissions will need to be stored, even under strict mitigation policies. Delayed policy action further increases this amount substantially.
In a study with the energy system model PROMETHEUS, Fragkos evaluated higher CO2 budgets for limiting global warming to 1.5 °C [13], in order to account for new budget estimates from the IPCC Special Report on the impacts of global warming of 1.5 °C [14]. The author suggests that, for a CO2 budget of 860 Gt and depending on the respective scenario assumptions, the quantity of CO2 captured by BECCS to reach net-zero emissions varies between 0 and 205 Gt.
In a study with the model GENeSYS-MOD, the implications of a 2 °C and a 1.5 °C climate scenario for the European energy system were analysed and compared to a business-as-usual scenario [15]. Hainsch et al. concluded that the transformation to renewable energies is mostly market-driven, while further decarbonisation requires policy action. BECCS technology is only applied in the 1.5 °C scenario, but the need for carbon capture technologies and their cost-effectiveness might be underestimated because the model does not include some sectors of the economy (e.g., agriculture and a few branches of industry). A similar study was performed with the PyPSA model, which represents the energy, heat, and transport sectors [16]. An early and steady decarbonisation pathway for the European energy system was compared with a late and rapid decarbonisation, while the carbon budget for limiting global warming to 1.75 °C was applied to both scenarios. They identified the early decarbonisation scenario as the one with the lowest system costs, with the energy mix being mainly based on photovoltaic and wind technologies, while deployment of BECCS is not required.
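The role of the carbon budget in such early-versus-late comparisons can be illustrated with a back-of-the-envelope cumulative-emissions check. In the sketch below, the starting emissions level and both pathway shapes are illustrative assumptions, not values from the cited studies; the point is only that delayed action consumes substantially more of a fixed budget, which then has to be recovered through steeper cuts or negative emissions such as BECCS.

```python
import numpy as np

YEARS = np.arange(2020, 2051)

def linear_path(e0, net_zero_year):
    """Annual emissions falling linearly from e0 in 2020 to zero."""
    return np.clip(e0 * (net_zero_year - YEARS) / (net_zero_year - 2020), 0.0, None)

def delayed_path(e0, start_year, net_zero_year):
    """Emissions held at e0 until start_year, then falling linearly to zero."""
    declining = e0 * (net_zero_year - YEARS) / (net_zero_year - start_year)
    return np.clip(np.where(YEARS < start_year, e0, declining), 0.0, None)

E0 = 3.0  # illustrative annual emissions in Gt CO2 (assumption)
early = linear_path(E0, net_zero_year=2050)
late = delayed_path(E0, start_year=2035, net_zero_year=2050)

print(f"cumulative 2020-2050, early and steady: {early.sum():5.1f} Gt CO2")
print(f"cumulative 2020-2050, late and rapid:   {late.sum():5.1f} Gt CO2")
```

Under these assumptions the delayed pathway emits roughly 69 Gt against 46.5 Gt for the early one, so a budget sized for the early pathway would force the late one to rely on negative emissions.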
In most of the studies, deep decarbonisation requires not only the integration of renewable energies, but also several hydrogen and negative-emission technologies. Sgobbi et al. investigated several hydrogen technologies within the JRC-EU-TIMES model and emphasised their importance for decarbonising the transport and industry sectors, gaining even more relevance if negative-emission technologies are deployed late [17].
Klein et al. analysed the deployment of biofuelled integrated gasification combined cycle (Bio-IGCC) plants with Carbon Capture & Storage (CCS) technology in the integrated assessment model REMIND and found that deployment was more sensitive to price changes of the biomass than to different techno-economic parameters of the Bio-IGCC process itself [18]. Davis et al. investigated the limits of BECCS technologies and argued that their potential is limited not only by the geological storage capacity for CO2, but also by the required land area, nutrient demand, and water use. Regarding the economic aspects, BECCS seems to be a cost-effective mitigation option for the energy system, because the deployment of BECCS technology can be observed even in less ambitious climate scenarios; however, it appears to the authors to be a risky mitigation option in comparison to an immediate and strong GHG reduction [19].
In its Impact Assessment on the revised 2030 climate target, the European Commission provides an in-depth analysis of consequences in various aspects [20]. The European Commission employs three modelling tools to evaluate macro-economic effects of the new target under different assumptions. Results from all three modelling tools presented only moderate GDP effects at the EU level. GDP deviation from baseline was below 1% for all models and assumptions, for some slightly negative and for others slightly positive. Regarding sectoral output developments, the results of the JRC-GEM-E3 CGE model showed severe losses in the fossil fuel sectors. Developments in energy-intensive sectors with strong international competition depend heavily on the level of global climate action. Deviations from baseline are usually negative for fragmented action and positive for global action. Observed impacts on employment are generally very limited. However, employment impacts in the fossil fuels sectors are strong, especially in the coal sector.
The European Commission also evaluates several options of future ETS and ESR design. Continuation of the current ETS and ESR scope would require significant tightening of the reduction targets in one or both of the systems. Adjusting the ETS to the revised 2030 target could include an adjustment of the linear reduction factor or a one-off cap-reduction. The Impact Assessment also mentions the option of not strengthening the current ESR targets at all, at the expense of even further tightened ETS targets.
Furthermore, the Impact Assessment discusses in detail a scope extension of the ETS. This approach is the subject of several publications (e.g., by Meyer-Ohlendorf and Barth [21]). According to the analysis of the European Commission, extension of the ETS sectoral coverage to buildings and road transport induces only limited emission reductions in these sectors. In particular, the transport sector's response to ETS carbon prices is considered weak due to the already high level of energy and national carbon taxation within Member States.
Current ESR targets were established with respect to GDP per capita within Member States to ensure fairness between higher and lower income countries [2]. The EU Commission also provided an Inception Impact Assessment specifically on the review of the ESR in light of the revised 2030 target and the goal of climate neutrality by 2050 [22]. According to this document, the review of the ESR has four objectives on a more specific level: (...) "incentives for the necessary additional action in the effort sharing sectors should be provided, cost-effective solutions should be promoted, Member States' efforts should be shared in a fair and consistent manner, and coherence with related legislation should be maintained". Recently, the European Commission's proposal for the buildings and transport sectors seems to shape up as an inclusion of these sectors in the ETS, possibly in a separate system [23].
Babonneau et al. evaluated the 2016 effort sharing suggestions of the EU Commission related to the 80% GHG mitigation target in 2050 relative to 1990 [24]. According to their results, application of burden sharing to all sectors is more beneficial in terms of welfare for low income Member States, while high income Member States benefit from an application of the ETS to all sectors.
We could not find any modelling study concerning the Green Deal that addresses burden sharing between EU regions, either with energy system models or with general equilibrium models. This paper aims to deepen the understanding of the implications of achieving the 2030 target through a combined exploration of the Green Deal from both an energy system and an economics perspective.
Multiple studies consider stronger mitigation in the ETS sector to be favourable for achieving the 2030 targets. We aim to contribute to this with TIMES PanEU by shedding light on the optimal distribution of mitigation burden between ETS and Non-ETS sectors from a system perspective.
By comparing the literature, one can observe that the need for negative emissions depends on how many sectors of the energy system are covered within the model. While assessments with IAMs opt for the deployment of BECCS technologies, studies with energy system models that do not cover all sectors regard these technologies as unnecessary. As full sectoral coverage is crucial for energy scenario analysis, this work further investigates the significance of negative emissions for the implementation of the Green Deal.
Methodology
For this study, we employed an energy system model, TIMES-PanEU, and the computable general equilibrium (CGE) model NEWAGE. Captured emissions, determined by TIMES, were used as input for NEWAGE. This section further explains the two modelling tools.
TIMES PanEU
Energy system models are a widely used tool for analysing the techno-economic implications of policy imperatives such as emission targets for countries or regions. They have been successfully employed to analyse emission targets in a large number of publications (e.g., [13,15,18,25]) and the European Commission's Joint Research Centre (JRC) uses a different model [26], which was derived from the same model framework utilised in this study.
TIMES PanEU was applied in this study as an energy system model. This section provides a very brief description of the underlying mechanics, basic correlations within the energy system, and information about the assumptions on the costs and potentials that we established for this study. For a more detailed description of the model, please refer to [25,27,28].
The fundamental framework of the model is the reference energy system (RES). As can be seen in Figure 1, the RES maps all energy carriers, technologies, materials, emission flows, and service demands that are necessary to thoroughly depict the energy system. It covers the complete energy system, beginning with the supply of resources and energy carriers and ending with the fulfilment of the defined demands. Primary energy can be converted into secondary energy, possibly multiple times, before being used as final energy, taking into account the respective costs and efficiencies of the conversion steps. The consumption of fossil energy sources causes emissions, whereby the greenhouse gases CO2, methane (CH4), and nitrous oxide (N2O) were considered in the scope of this study.

Figure 1. TIMES PanEU reference energy system [28].
Technologies are mapped with investment costs, fixed and variable costs, lifetimes, and efficiencies; energy carriers can be mined within a region or imported at specified costs. Ultimately, all costs are included in the system cost function, which is to be minimised.
TIMES PanEU is a linear optimiser that aims to minimise the total discounted system cost in a given timeframe to meet exogenously given service demands [29]. It has perfect foresight over the whole modelled time horizon.
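To illustrate the class of problem such a model solves, the following minimal sketch (our own toy example, not the actual TIMES PanEU formulation; the two technologies, their costs, emission factors, the demand, and the cap are all invented for illustration) minimises generation cost subject to a demand balance and an emissions cap:

```python
# Toy cost-minimisation in the spirit of a TIMES-type LP: two hypothetical
# technologies must jointly meet demand while respecting an emissions cap.
from scipy.optimize import linprog

cost = [60.0, 80.0]    # EUR/MWh (illustrative; for wind, mostly annualised investment)
emis = [0.37, 0.0]     # t CO2 per MWh for each technology
demand = 100.0         # MWh of exogenous service demand
cap = 20.0             # t CO2 emissions cap

res = linprog(
    c=cost,
    A_ub=[emis], b_ub=[cap],            # emissions cap: 0.37*gas <= 20
    A_eq=[[1.0, 1.0]], b_eq=[demand],   # demand balance: gas + wind = 100
    bounds=[(0, None), (0, None)],
)
gas, wind = res.x
print(f"gas: {gas:.1f} MWh, wind: {wind:.1f} MWh, cost: {res.fun:.0f} EUR")
```

The real model extends this pattern to thousands of processes, regions, and time slices, with investment decisions and discounting over the full horizon.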
Our model covers all of the EU plus the United Kingdom (UK), Norway (NO), and Switzerland (CH), with each country represented as an individual region with its unique energy system. Energy service demands must be met for every region, and interactions between the regions are mapped via trade in electricity, bioenergy as well as emissions.
The model horizon spans from 2010 to 2050, split into periods with a length of five years each. Periods are represented by a single year, with each year being divided into 12 time slices: one day for every season divided into a slice for the day, one for the night, and one peak hour, which covers the time of the day where maximal load occurs.
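The resulting intra-annual resolution can be sketched as follows (the slice labels are assumed for illustration):

```python
# 4 seasons x (day, night, peak) = 12 time slices per modelled year,
# matching the temporal structure described above.
seasons = ["winter", "spring", "summer", "autumn"]
parts = ["day", "night", "peak"]
time_slices = [f"{season}_{part}" for season in seasons for part in parts]
assert len(time_slices) == 12
```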
In addition to the model structure described in [25,27,30], the TIMES PanEU model was further developed to consider the latest technology updates and policy-relevant developments.
Domestic hydrogen production was implemented. In addition to electrolysis, biomass and gas gasification options were integrated into the model to produce hydrogen. The techno-economic characteristics of the production technologies were taken from [26]. Hydrogen-fuelled technologies are also defined in each sector. Ammonia production is implemented in the model based on the Haber-Bosch process, using hydrogen and nitrogen [31]. Synfuels are employed to provide additional decarbonisation options, especially in the transport and industry sectors. For these energy carriers, import processes are defined for synthetic gas, synthetic kerosene, synthetic diesel, synthetic fuel oil, and synthetic gasoline. They are implemented as zero-emission energy carriers.
Coal phase-out commitments of the different Member States were integrated, based on [32]. In line with these commitments, coal and lignite CCS technologies were not defined as investment options for the respective countries.
The technological option to generate negative emissions was given to the model via the combination of electricity generation from biomass with downstream CCS, further referred to as BECCS. The biomass potential for every country was taken from [33]. Across the analysis, high biomass potential curves were integrated into the existing model structure. Renewable energy potentials were based on ENSPRESO [34], and livestock demand was reduced by 50% until 2050, following the AT Kearney study [35]. However, this leaves residual emissions (especially CH4) that can be reduced by (costly and complex) technical measures, but not completely. A base amount of about 25% of agricultural emissions cannot be reduced.
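A stylised sketch of the resulting accounting logic (the function names and all numbers below are illustrative assumptions, not model inputs or outputs): at most about 75% of agricultural process emissions can be abated, and BECCS removals are netted against what remains.

```python
def residual_agri_emissions(agri_mt: float, abated_share: float) -> float:
    """Remaining agricultural emissions in Mt CO2-eq; a base amount of
    about 25% cannot be abated by technical measures."""
    effective_abatement = min(abated_share, 0.75)
    return agri_mt * (1.0 - effective_abatement)

def net_ghg(other_residual_mt: float, agri_mt: float,
            abated_share: float, beccs_capture_mt: float) -> float:
    """Net national GHG balance; negative values mean net removals."""
    return (other_residual_mt
            + residual_agri_emissions(agri_mt, abated_share)
            - beccs_capture_mt)

# e.g., 10 Mt of other residuals, 40 Mt of agricultural emissions abated as
# far as technically possible, and 30 Mt captured by BECCS:
print(net_ghg(10.0, 40.0, 0.75, 30.0))   # -> -10.0, i.e., net-negative
```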
The GHG abatement options for the process emissions in the agriculture sector were derived from [36]. Import prices for fossil fuels were taken from the "Sustainable Development Scenario" in [37].
NEWAGE
For the analysis of the macroeconomic effects of the different scenarios, we employed the CGE model NEWAGE (model website as of 25 November 2021: https://www.ier.uni-stuttgart.de/en/research/models/NEWAGE/, accessed on 21 November 2021). CGE models are a well-established class of macroeconomic models, already used previously in the evaluation of the current EU effort sharing [24]. One of the models used in the European Commission's Impact Assessment is also a CGE model (JRC-GEM-E3) [38].
NEWAGE's representation of the energy sector is not as precise as TIMES PanEU's. However, NEWAGE takes income and demand effects into account. The economy is modelled in a "closed loop" with its interconnections between consumers and industry sectors. Moreover, NEWAGE covers not only the EU, but the whole world. Therefore, it facilitates the analysis of repercussions of energy-related policy decisions in the worldwide economy. In the following, a brief overview of the basic features of the model is given.
NEWAGE is applied in a recursive-dynamic manner and does not have foresight. The base year is 2011, followed by 2015 and further five-year time steps until 2050. Production of goods and services is split into 23 sectors. Underlying trade data are taken from the GTAP 9 [39] and EXIOBASE 3 [40] databases. Production is modelled with Constant Elasticity of Substitution (CES) functions. A special feature of NEWAGE is the representation of the electricity sector with 18 different electricity generation technologies. In its current version, NEWAGE does not include the full spectrum of greenhouse gas emissions, but only energy-related CO2 emissions. For more details on the structure of production and electricity generation in NEWAGE, see Appendix B.
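For reference, the generic CES form used in such models can be written as follows (a textbook formulation; the exact nesting structure in NEWAGE differs and is documented in Appendix B):

$$Y = A\left(\sum_i \alpha_i \, x_i^{\rho}\right)^{1/\rho}, \qquad \sigma = \frac{1}{1-\rho},$$

where $Y$ is output, $x_i$ are inputs with share parameters $\alpha_i$, $A$ is a scale parameter, and $\sigma$ is the constant elasticity of substitution between the inputs.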
In NEWAGE, the world is represented by 18 regions. Some large countries make up a region by themselves, but most countries are aggregated into regions. For example, the Scandinavian and Baltic countries are combined with Ireland in the Northern EU region. Austria, Czech Republic, Hungary, Slovakia, Slovenia, Croatia, Romania, Bulgaria, Greece, Cyprus, and Malta make up the south-eastern EU region (see Figure 2 below).
NEWAGE calculations consider trade flows among world regions. In all scenarios calculated for this article, prices of fossil fuels in the regions with the largest fossil fuel resources were fixed at values given by the sustainable development scenario of the IEA [37].
Scenario Framework
For this study, we examined three main scenarios. All three scenarios were based on the goals of the EU Commission's Green Deal (i.e., 100% greenhouse gas neutrality for the entire EU is achieved by 2050).
• Optimal (OPT): Reaching the primary climate targets without additional restrictions regarding the distribution between ETS and ESR or between the countries exceeding the already agreed distributions.
• ETS first: Reaching the climate targets, but with a major contribution from the ETS sector, which has to be completely GHG-neutral by 2050. However, there are no restrictions regarding the burden sharing between the countries in the ETS sector. Reductions already agreed upon in the ESR sector will be extrapolated until 2050.
• ESR more: Meeting the climate targets, but with a major contribution from the ESR sectors; these must reduce GHG emissions by 95% until 2050 (compared to 2005). However, each country must reduce its emissions by at least 80%. The ETS sector must achieve slightly higher reductions than in the Optimal scenario.
The precise reduction targets we have specified for each scenario can be found in Table 1. Please note that the targets refer to the EU plus the UK. We included the UK alongside the EU in the targets of the scenarios to ensure comparability with the current Effort Sharing Regulation, which still includes the UK. In NEWAGE, CO 2 reduction goals for regions outside EU + the United Kingdom were derived from the sustainable development scenario of IEA [37].
The country-specific ESR targets can be found in Table A1 in Appendix A.
Optimal Burden Sharing in 2030 to Reach the Goal of 55% Reduction
The objective of the following first part of the results analysis is to provide answers to these two questions:
• What mitigation in the ETS and ESR sectors is optimal from a system perspective in 2030?
• Which countries or regions should shoulder which burden in 2030?
We examined both issues always under the condition that the 55% reduction target in 2030 for the EU as a whole is achieved.
We begin the analyses of burden sharing with the allocation of the mitigation efforts between the sectors covered by the ETS and the sectors not covered by the ETS, the ESR sectors. We then move on to the burden sharing between the countries of the EU, first from an energy system point of view before supplementing the assessment with a macroeconomic perspective.
Burden Sharing between ETS and ESR Sectors
We conducted the analysis of the optimal allocation of abatements between ETS and ESR by examining the distribution of the 2030 emissions budget between the ETS and ESR sectors. The corresponding budgets for the three scenarios can be found in Figure 3.
Our model calculations with TIMES PanEU indicated a share for the ESR sector of the overall GHG emissions of about 61% in the Optimal scenario for 2030. Currently, this share is at 57%, and the proposals of the EU Commission would also yield a similar share of 61% in the medium term. Hence, this is already a first interesting finding, as other studies [44] have derived significantly higher shares of about 80% for the budget of the ESR sector. Unfortunately, it is not clear from the cited study how the applied model is structured, so we cannot conclusively determine whether these differences are due to different implementations of the ETS or the ESR in the model.
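As a small worked example of what such a split means in absolute terms (the 1990 baseline figure below is an assumption chosen purely for illustration, not a value taken from the model or the paper):

```python
# Illustrative arithmetic for the 2030 budget split; the baseline is assumed.
base_1990_mt = 4_700.0                     # assumed 1990 GHG emissions, Mt CO2-eq
budget_2030 = base_1990_mt * (1 - 0.55)    # -55% target -> 2030 emissions budget
esr_share = 0.61                           # cost-optimal ESR share found above
print(f"ESR budget: {budget_2030 * esr_share:.0f} Mt, "
      f"ETS budget: {budget_2030 * (1 - esr_share):.0f} Mt")
```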
The comparison of the three scenarios showed particularly large deviations in ETS first compared to the other two. In Figure 4, it is evident that the ETS sector mitigated significantly more in relation to 2005, and thus also departed substantially from the ratios in Optimal and ESR more. However, all three scenarios shared in common that the ETS sector always contributed the greater part of the reductions, ranging from −60% (Optimal) to −78% (ETS first). Even in ESR more, with −58%, ETS sector reductions were greater than the −55% specified in the scenario framework.
Nevertheless, with a reduction of 44%, the ESR sector made a substantial contribution to the overall emission mitigation in our cost-optimal case. We conclude that a strong focus on the ETS sectors, as the ETS first scenario stipulates, is disadvantageous from a system perspective, which can also be deduced from the annualised system costs in 2030, where ETS first had 2.5% higher costs than the reference scenario, compared to 1.3% for the Optimal scenario and 1.5% for ESR more.
We have identified two main reasons for this effect, which we will elaborate in the following:
1. In the optimal case, the building sector can cheaply reduce emissions to a certain amount through district heating; with tough ETS targets, this is limited in the medium term.
2. By burdening the power sector with the ETS targets, we obtain a higher electricity price, which ultimately leads to a decreased use of electricity-based technologies in the ESR sectors, particularly in the transport sector.
The expansion of district heating is a central component of the transformation of the energy system. The share of district heating in the final energy consumption of buildings rose between 2020 and 2030 from approximately 7% to roughly 12% in the Optimal scenario. For the building sector, this technology represents a good option for reducing emissions in the medium and long term, especially as it is largely provided by efficient gas combined heat and power (CHP) plants in the medium term. In the long term, the district heating supply is then defossilised by shifting production to large-scale heat pumps, biomass-fired CHP plants, or geothermal energy.
Given a strong focus on the ETS sector, a large part of the gas-fired power plants cannot be operated in 2030 in order to achieve the reduction targets, as the ETS first scenario demonstrates. As can be seen in Figure 5a, this led to a considerably lower share of district heat in the final energy consumption of the building sector, which dropped from around 12% (Optimal) to around 8% (ETS first). It emerges that in the case of ETS first, district heating was predominantly replaced by the direct combustion of gas. As a result, emissions in the ESR sector increased, while in the ETS sectors, emissions decreased by approx. 140 Mt CO2. This "shifting" of emissions ultimately accomplished the ETS goals, but makes little sense from an energy system perspective.
However, the building sector saw only a negligible decline in electricity consumption, which can be explained by the concurrent decrease in district heating. On one hand, the requirements imposed by ETS first increased the marginal costs for the supply of electricity, which in fact made electricity less attractive for the building sector. On the other hand, this effect could be found to a much greater extent in district heating. We therefore observed two effects running in opposite directions: District heating becomes less attractive for the building sector, but this does not trigger a stronger electrification due to the simultaneously rising electricity prices; the gap is filled by fossil energy sources.
As a second in-depth investigation, the impact of ETS first on the transport sector will be elaborated by looking again at the final energy consumption in Figure 5b. We can see that in the optimal case in 2030, a significant share of the transport sector was already electrified (approx. 15% of final energy). In TIMES PanEU, we assumed cost parity between electric and combustion engines by 2025. In the Optimal scenario, this led to an early electrification of the transport sector, resulting in 33 million fully electrically powered cars in the EU in 2030.
In ETS first, the share of electricity dropped to only 11% of the final energy consumption, and the difference of 4 percentage points compared to the Optimal scenario was completely replaced by fossil fuels. The higher costs of electricity generation due to the high reduction pressure in ETS first led to a decreasing economic appeal of electrical alternatives in transport. Here, just as in the building sector, a non-cost-optimal shift of emissions to the ESR sector took place.
Overall, ETS first led to a lower electricity consumption of about 100 TWh compared to Optimal and also ESR more, which corresponded to a relative deviation of about 3%. However, this difference occurred almost exclusively in the transport sector; other sectors were affected to a much lesser extent, as is described above for the building sector.
The conclusions of this chapter can thus be summarised as follows:
• Although a cost-optimal reduction to achieve the EU targets in 2030 leads to a relatively stronger reduction in the ETS sector, the ESR sector should also make a significant contribution, leading to a ratio of emissions of 39% to 61% (ETS/ESR).
• A too heavy focus on reductions in the ETS sector in 2030 leads to two negative effects: first, district heating, which optimally contributes to decarbonisation, is deployed less in the building sector (8% of final energy consumption in ETS first compared to 12% in the Optimal scenario). Second, tightened targets for the power sector lead to higher electricity prices, meaning that electric options are deployed less in the transport sector, which results in higher emissions in that sector.
Burden Sharing between the European Regions in 2030 from an Energy System Perspective
This section analyses the burden sharing between the countries in 2030, which has become necessary through the tightened targets of the Green Deal.
As a first step, a comparison of the needed emission reductions in the EU Member States in the Optimal scenario with the targets for these countries that have been agreed in the ESR thus far was carried out for the Non-ETS sectors. On the one hand, the aim was to review the targets with regard to their suitability for achieving the 55% target for Europe in 2030. On the other hand, it was to be examined by how much each country should reduce its Non-ETS sector emissions from a system perspective. The comparison of the current targets with the Optimal scenario, shown in Figure 6, shows that all countries reduced more than they are currently required to do by the Effort Sharing Regulation. The majority of countries even needed to reduce significantly more, so the ESR targets adopted thus far are nowhere near sufficient to achieve the 55% target in 2030.
It is striking that Poland, Romania, and Bulgaria contributed a significantly higher reduction in the Optimal scenario. From a system perspective, countries that are "spared" in the ESR should actually reduce significantly more in the ESR sector; economically weaker countries should thus contribute more to mitigation at an early stage than previously envisaged.
To examine the burden sharing between the countries and regions of Europe more deeply, we continued by determining the total CO2 reductions that the countries will have to provide in 2030 compared to a reference case to achieve the climate targets in this year. In this context, we only evaluated the CO2 emissions to facilitate comparisons with the NEWAGE results. However, the two remaining greenhouse gases were always part of the reduction requirement, regardless of this particular evaluation. To achieve better comparability between the models, we implemented a reference scenario to which we could compare the other scenarios. The scenario was defined as a business-as-usual scenario in which no reductions beyond the already adopted ETS and ESR targets are defined. NEWAGE does not assume any CO2 reduction targets outside the EU + United Kingdom in this scenario. The results of this analysis are shown in Figure 7. The countries of the EU were aggregated here according to the NEWAGE standard (see Section 3.2) in order to be able to subsequently better compare the effects with NEWAGE.

Looking at the additionally required reductions in the Optimal scenario compared to the reference scenario, it is apparent that, in particular, economically weaker regions such as Poland, Spain and Portugal, and south-eastern EU, as well as the rather strong northern EU (which also includes the Baltic countries), had to reduce more in relation to the reference scenario, which is in line with the findings from the comparison between the Optimal scenario and the current targets in Figure 6.
The economically weaker regions reduced their emissions in the Optimal scenario by −27% up to −33%, compared to the previous targets of the reference scenario, which means that they had to contribute significantly larger reductions than regions such as Germany (−15%), the Benelux countries (−14%), France (−14%), or the United Kingdom (−17%).
By taking the annualised system costs (system costs include all fixed, variable, and investment costs of the technologies as well as the costs for the import and distribution of energy carriers) (Figure 8) as a measure for the burden placed on the countries by the reductions in the three scenarios, we can see the reason for these unequally distributed burdens. Economically stronger countries such as Germany or France coped much better with the additional reductions than, for example, Poland, south-eastern EU, or northern EU.

In the Optimal scenario, it is these regions (and Benelux) that had to shoulder the relatively largest increases in system costs. From a system perspective, these regions should mitigate more in order to achieve the set climate targets, but from a political perspective, it is also clear that these regions should be supported in this endeavour. If the system targets are to be achieved, there must either be European compensation mechanisms so that all countries are able to make their reductions, or (as in ESR more) an uneven distribution of the reduction burden is imposed for the Non-ETS sector in order to relieve the weaker countries of some of their system costs. In this case, however, it may be accepted that the cost-optimal path is not followed.
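For reference, the annualisation of investment costs underlying this metric typically uses a capital recovery factor; the following is a generic textbook sketch, not the specific TIMES PanEU discounting convention:

```python
def annualised_investment(investment: float, rate: float, lifetime: int) -> float:
    """Constant annual payment equivalent to an up-front investment,
    using the capital recovery factor r(1+r)^n / ((1+r)^n - 1)."""
    q = (1.0 + rate) ** lifetime
    return investment * rate * q / (q - 1.0)

# e.g., 1000 EUR invested at a 5% discount rate over a 20-year lifetime:
print(round(annualised_investment(1000.0, 0.05, 20), 2))   # -> 80.24 EUR/year
```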
However, this also shows the limits of energy system analysis with regard to the evaluation of political decision-making processes. With the help of TIMES PanEU, it is possible to very precisely analyse which sectors and regions have to bear which reduction burden from a system perspective in order to achieve the overarching reduction target.
TIMES PanEU can only partially analyse the interaction of the economic system with changes in the energy system; impacts on the economic structure from the demand effects are not taken into account. This is where the market-economic analysis with NEWAGE comes into play. NEWAGE has a less detailed depiction of the energy system but includes the economy as a whole in the analysis and can depict feedback effects between economic sectors. The analysis of the 2030 burden sharing within the EU is therefore continued in Section 4.3.
The conclusions of this section can thus be summarised as follows:
• For the −55% target in 2030, all countries must contribute significantly more in the ESR sector than agreed under the ESR targets. The economically weaker regions needed to additionally reduce their emissions in the Optimal scenario by −27% up to −33% compared to the current targets, which means that they had to contribute significantly larger relative reductions than regions such as Germany (−15%), the Benelux countries (−14%), France (−14%), or the United Kingdom (−17%).
• However, this leads to disproportionately high increases in the system costs of these countries or regions. If the system targets are to be met, there must either be European compensation mechanisms so that all countries are able to achieve their reductions, or an uneven distribution of the reduction burdens can be prescribed for the Non-ETS sector in order to relieve the weaker countries of some of their system costs.
Burden Sharing between the Regions in 2030 from a Macroeconomic Point of View
To complement the analyses carried out with TIMES PanEU by adding an independent macroeconomic perspective, the same scenarios were calculated for 2030. In the following, effects are discussed in comparison to the reference scenario. Therefore, the mentioned effects occur on top of existing effects originating from the already adopted ETS and ESR targets.
From an EU-wide perspective, all three scenarios led to comparable losses in gross domestic product (GDP) in 2030 relative to the reference scenario. The Optimal scenario harmed EU-wide GDP development the least (−1.20%), while the losses in ETS first (−1.41%) exceeded those of ESR more (−1.26%).
Gross value added (GVA) of the Non-ETS sector showed the highest losses in ETS first and the lowest in ESR more. GVA of the ETS sector increased and was almost on the same level in all three scenarios. The main driver behind this positive ETS sector development was electricity production.
EU prices of almost all goods from ETS sectors increased in the three scenarios relative to the reference scenario, with the strongest rise in ETS first. In contrast, EU prices of almost all goods from the Non-ETS sector decreased. ETS industries could successfully impose higher prices, but other industries further down the value chain could not do this and had to carry the burden.
Among the rising prices from the ETS sector in ETS first, electricity prices stand out with very strong increases between 22% and 107%. For all EU regions, electricity prices clearly rose the most in ETS first and the least in ESR more.
Among the Green Deal scenarios, fossil fuel consumption was highest in ETS first and lowest in ESR more. The opposite was true for electricity consumption: it was highest for ESR more and lowest for ETS first. This points to the influence of electricity prices on electrification. If strong mitigation targets are not accompanied by moderate electricity prices, electrification could be impeded.
The regionally disaggregated GDP view (Figure 9) revealed losses between −0.79% and −1.95%.

Figure 9. Deviation of EU regional GDP in 2030 compared to the reference scenario.
In the Optimal scenario, south-eastern EU faced the strongest relative GDP loss (−1.60%) compared to the other EU regions. Northern EU was affected the least (−0.84%). Optimal was the worst scenario among the three for no region except south-eastern EU, but was the best one for northern EU, France, and UKI.
The Optimal scenario led to more levelled absolute CO2 prices among the regions than the ETS first and ESR more scenarios. CO2 abatement relative to the reference scenario was comparably high for the south-eastern EU in all three scenarios. However, compared with the other regions, absolute weighted CO2 prices in ETS first and ESR more were still rather low for the south-eastern EU (see Appendix C). Absolute Non-ETS CO2 prices were the lowest among EU regions for the south-eastern EU in ETS first and ESR more. The south-eastern EU was "treated with care" in ETS first and ESR more by low Non-ETS prices, but not to the same extent in the Optimal scenario.
Northern EU achieved comparably moderate relative abatement under the Optimal scenario. The more levelled CO2 prices helped northern EU: in the Optimal scenario, absolute weighted and Non-ETS CO2 prices did not exceed those of most other regions as much as in ETS first and ESR more. Exports from northern EU remained strong in the Optimal scenario, and losses of export value relative to the reference scenario were the second lowest among the EU regions in this scenario.
Macroeconomic impacts in the ETS first scenario were most harmful for Poland (−1.94%) and least for the northern EU (−1.06%) relative to the reference scenario. From a macroeconomic perspective, it was the worst among the three scenarios for Poland, Benelux, Italy, and Germany, and the best one only for Spain and Portugal.
Poland was the EU region with the highest relative abatement among EU regions in ETS first. ETS abatement translates to high national abatement for Poland, as it is the region with the highest GVA share of ETS sectors. However, the ETS sector even experienced an above-average (EU27 + UK) relative GVA increase. Poland faced particularly strong relative GVA losses in Non-ETS sub-sectors such as the buildings and the service sector. Relative Non-ETS GVA losses were higher than in all other EU regions. Poland is the EU region with the highest input share of electricity, and electricity prices climbed the highest in Poland in all scenarios; in ETS first, electricity prices rose the most, by 107% relative to the reference scenario. Germany was the region with the second largest electricity price rise in all three scenarios.
Northern EU's relative CO2 abatement in ETS first was not as high as Poland's, but still higher than the relative abatement of most other EU regions. The same applied for the GVA share of the ETS sector. Northern EU's ETS sector as a whole showed the best GVA development compared to the reference scenario in ETS first among the EU regions. In ESR more and Optimal, the relative GVA increase in the ETS sector also exceeded that of all (ESR more) or most (Optimal) of the other regions, but not as much as in ETS first. Relative losses of the Non-ETS sector were rather strong, but not as strong as in Poland. Again, northern EU's exports were hardly affected, with northern EU's trade balance at its highest values relative to the reference scenario.
In the ESR more scenario, the Spain and Portugal region experienced the highest relative GDP loss (−1.95%) among the EU regions and south-eastern EU the lowest (−0.79%). ESR more was the worst scenario from a macroeconomic view for Spain and Portugal, France, northern EU, and UK, and the best scenario for Poland, Italy, Benelux, and Germany.
The Spain and Portugal region has the second largest service sector among EU regions. In general, the GVA share of Non-ETS industries in Spain and Portugal is slightly higher than in most other EU regions. At the same time, the Spain and Portugal region provided the largest Non-ETS abatement relative to the reference scenario among EU regions in ESR more. Ultimately, the overall GVA of the Non-ETS sector faced the highest relative losses in Spain and Portugal compared to the other EU regions in ESR more. For Spain and Portugal, France, northern EU, UKI, and Italy, service sector GVA declined the most in ESR more.
South-eastern EU was least affected among EU regions in ESR more despite facing the highest relative overall and ETS abatement compared to the other EU regions. Weighted and Non-ETS CO2 prices showed the strongest relative rise among EU regions. However, absolute weighted CO2 prices for south-eastern EU were still rather low, and absolute Non-ETS CO2 prices were even the lowest among EU regions. In the end, the ETS sector in south-eastern EU increased its GVA more than the EU27 + UK average percentage-wise, and the Non-ETS sector experienced better relative GVA development than that of all other EU regions in ESR more.
The conclusions of this section can thus be summarised as follows:
• A strong reduction requirement on the ETS side can lead to an increase in electricity prices that could impede electrification;
• On the EU level, a strong abatement requirement on the ETS side not only affects the energy-intensive ETS industries themselves. Depending on how well they are able to pass through increased costs, high abatement in the ETS sector can even harm the Non-ETS sector in particular;
• Economically weaker regions of the EU tend to have a greater additional economic burden in the Optimal and ETS first scenarios than the others (see Table 2 below). Compensation mechanisms should be created to offset these burdens if the focus of additional abatement is not on the ESR side; and
• In the short term (2030 perspective), a focus on increased ESR reduction instead of ETS reduction might be economically more favourable from an EU-wide perspective. From the perspective of the single EU regions, this cannot be said in all cases. For economically weak regions, economic losses tend to be limited in the short term if stronger Non-ETS reduction with continued differentiated effort sharing is applied.
Burden Sharing in 2050
Although the focus of this paper was on the implications of the tightened targets of the Green Deal in 2030, we consider it imperative to also examine the long-term effects of the tightened targets (i.e., climate neutrality in 2050). For these considerations, only TIMES PanEU will be used in the following.
We begin with the total GHG emissions for the individual countries shown in Figure 10, which arise under the three different scenarios: Compensation mechanisms should be created to offset these burdens if the focus of additional abatement is not on the ESR side; and • In the short-term (2030 perspective), a focus on increased ESR reduction instead of ETS reduction might be economically more favourable from an EU-wide perspective. From the perspective of the single EU regions, this cannot be said in all cases. For economically weak regions, economic losses tend to be limited in the short-term if stronger Non-ETS reduction with continued differentiated effort sharing is applied.
Burden Sharing in 2050
Although the focus of this paper was on the implications of the tightened targets of the Green Deal in 2030, we consider it imperative to also examine the long-term effects of the tightened targets (i.e., climate neutrality in 2050). For these considerations, only TIMES PanEU will be used in the following.
We begin with the total GHG emissions for the individual countries, shown in Figure 10, which arise under the three different scenarios. It can be observed that, from a cost-optimal system perspective, some of the countries had to bring their total GHG emissions into the negative range by utilising electricity from biomass plus CCS in order to compensate for the remaining residual emissions of other countries, with Spain accounting for the largest total negative emissions of up to −65 Mt per year, depending on the scenario.
It should be noted that negative emissions from BECCS were used in all scenarios for the reasons mentioned in Section 3.1. For this reason, Optimal and ETS first barely differed in 2050, as in both scenarios, more than 100% mitigation took place in the ETS sector. It is noticeable that fewer negative emissions were needed in ESR more. This is mainly due to mitigation in the agricultural sector, where technological mitigation options are extremely costly. However, when they were necessary due to national requirements and the general ESR reduction of 95%, this also reduced the required BECCS use to compensate for agricultural emissions compared to the Optimal scenario.
The countries that dropped into the negative emissions range were, due to the use of BECCS, countries with high potential to grow biomass on arable land [36]: France, Spain, Romania, and Poland, as well as, in part, Sweden and Hungary. In these countries, there was therefore automatic BECCS potential due to their biomass potential. It can also be seen here that it is predominantly these countries that benefited from ESR more, as it was no longer necessary to compensate for the residual emissions of other countries (except for Poland, which suffered from the 80% reduction in the ESR sector).
In accordance with the analysis in Section 4.2, we wanted to use not only the emissions, but also the impacts of the reductions on the system costs to evaluate the burden sharing between the regions; these are shown for this reason in Figure 11.

Figure 11. Deviation of the annualised system costs in 2050 compared to the reference scenario.
Due to the negative emissions, ETS first and Optimal hardly differed in system costs. There were only slight differences between the regions. In the case of the ESR more, however, there were very distinct differences in 2050. Due to the condition that all countries had to reduce at least 80% in the ESR sector, some regions were burdened significantly more than in the Optimal case. Italy, south-eastern EU, the Benelux countries, and Poland are particularly worth mentioning here.
However, due to the stipulation of a 95% reduction in the entire ESR regime, all countries were forced to make greater reductions, which is also reflected in 1.5% higher EU-wide system costs in ESR more compared to the Optimal scenario. Only countries with high negative emissions in the other scenarios benefited from this. Since these costs are, under the current framework, not credited in any form but solely burden a country's own energy system, such countries benefited from a scenario in which residual emissions from the ESR were low (e.g., France, Spain, Romania).
The contrast to the Energy System in 2030 becomes obvious: while weaker countries should achieve more in the ESR in the medium-term from a system perspective, they should not be required to reduce too much here in the long-term. Especially in countries where agriculture accounts for a large share of the energy system and emissions, such targets would only be achievable with great effort and at high cost. In this case, the burden distribution between countries would be disadvantageous.
From a system perspective, the use of BECCS is unavoidable and makes sense in many countries. From an economic, political, and financial point of view, however, it is not yet reasonable for these countries. In order to realise the negative emissions necessary from a system perspective, regulatory or financial incentives must be established. It is conceivable, for instance, that the ETS could provide a payment for negative emissions that would make the use of the technology profitable. Regardless of the concrete design of these incentives, the generation of negative emissions must become economically attractive, otherwise the achievement of complete climate neutrality is unrealistic.
The conclusions of this section can thus be summarised as follows:
• A strict reduction regime that imposes strong mitigation targets on all countries is not optimal from a system perspective, which is reflected in 1.5% higher EU-wide system costs; in particular, countries with large shares of agricultural emissions should not be given overly ambitious targets; and
• To compensate for these residual emissions, countries with high biomass potentials should produce negative emissions. Countries that take on these additional mitigation burdens beyond climate neutrality must be financially compensated.
Discussion
In order to better correlate the TIMES and NEWAGE findings, central results of the two models are compared and interpreted below.
In Section 4.3, NEWAGE determined a higher electricity price for the ETS first scenario, which may indicate a hindrance to electrification. TIMES identified the same problem, since the transport sector is electrified to a much lesser extent in this scenario; in ETS first, electricity accounts for only 11% of the final energy consumption compared to 15% in the Optimal scenario (see Figure 5b). However, this effect was not observable in other sectors, as described in Section 4.1. Although higher electricity prices also resulted in TIMES, the prices for district heating, for example, rose much more sharply in the building sector, which is why the higher electricity price is not reflected here.
The GDP losses arising from burden-sharing within the EU as calculated by NEWAGE (see Section 4.3) showed a slightly higher burden on the EU as a whole for ETS first than for the Optimal scenario in 2030 (−1.41% compared to −1.20%). Although TIMES PanEU in Section 4.2 saw a larger discrepancy in total system costs between these two scenarios, with +2.6% for ETS first compared to +1.3% in the Optimal scenario and +1.5% in ESR more, the direction was the same. Too strong a focus on mitigation in the ETS sector is not cost-optimal, from either a macroeconomic or an energy system perspective, for achieving the goals of the Green Deal.
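Throughout these comparisons (and in Figure 11 and Figures A3-A6), results are reported as deviations relative to the reference scenario. A minimal Python sketch of that metric is given below; the regional cost values are invented placeholders, not TIMES or NEWAGE output.

```python
# Relative deviation from the reference scenario -- the metric behind the
# system-cost and emissions comparisons (Figure 11, Figures A3-A6).
# The cost values below are invented placeholders, not TIMES/NEWAGE output.
reference_costs = {"DE": 100.0, "FR": 80.0, "PL": 40.0}  # bn EUR/yr, hypothetical
scenario_costs = {"DE": 102.5, "FR": 80.8, "PL": 42.1}

def deviation_pct(scenario: float, reference: float) -> float:
    """Percentage deviation of a scenario value from its reference value."""
    return (scenario - reference) / reference * 100.0

for region in reference_costs:
    dev = deviation_pct(scenario_costs[region], reference_costs[region])
    print(f"{region}: {dev:+.1f}% vs reference")
```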
We considered negative emissions to be indispensable because of residual agricultural emissions, but there are certainly studies that have arrived at other results. The absolute level is, of course, directly dependent on the assumed decline in livestock farming. Should the assumed 50% decline prove to be unrealistic, the necessary compensation through negative emissions would be significantly higher; the absolute values of negative emissions, distributed over the countries, also depend on our assumed biomass potentials.
As economically weak EU regions were impacted most negatively in the Optimal scenario from an energy system cost and macroeconomic perspective, a central finding of our work is the necessity of compensation mechanisms. Further research should be conducted in this field, especially with regard to the concrete design of these measures. An expansion of the Just Transition Fund would be conceivable, but more in-depth studies should be conducted on this.
Furthermore, we considered coal phase-outs as national climate measures in the context of this article. We are aware that there is a large number of national climate targets that are planned or have already been implemented. We decided not to include these measures in order to limit the scope of the study, but these national programmes should be examined in terms of their interactions with the Green Deal.
• A too heavy focus on reductions in the ETS sector in 2030 leads to two negative effects. First, district heating, which optimally contributes to decarbonisation, is deployed less in the building sector (8% of final energy consumption in ETS first compared to 12% in the Optimal scenario). Second, tightened targets for the power sector lead to higher electricity prices, meaning that electric options are less deployed in the transport sector, which results in higher emissions in that sector. Reductions should be distributed between the ESR and ETS sectors such that an emissions budget ratio of 61 to 39 results, similar to what the EU Commission proposes.
• From an energy system perspective, economically weaker countries should reduce their emissions significantly more by 2030 than previously envisaged in the ESR targets in order to achieve the EU-wide −55% targets at optimal cost. The economically weaker regions need to additionally reduce their emissions in the Optimal scenario by up to −33% compared to the current targets.
• However, the macroeconomic studies show the high economic burdens that result from distributing emission budgets according to a gross EU27 + UK optimum, which makes support via compensation measures absolutely necessary. Depending on the policy scenario applied, GDP losses in the range of −0.79% to −1.95% relative to baseline are found for single EU regions.
• An equally strict mitigation regime for all countries in 2050 is not optimal from a system perspective, which is reflected in 1.5% higher EU-wide system costs. In particular, countries with large shares of agricultural emissions should not be given excessively strong targets.
• In contrast, countries with high biomass potentials should generate negative emissions to compensate for these residual emissions. Countries that shoulder these additional mitigation burdens beyond climate neutrality must be financially compensated.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Acknowledgments: The authors gratefully acknowledge the support of the Open Access Publication Fund of the University of Stuttgart.
Conflicts of Interest:
The authors declare no conflict of interest. Figure A3. Deviation of EU27 + UK regional CO2 emissions in 2030 relative to the reference scenario. Figure A4. Deviation of EU27 + UK regional electricity prices in 2030 relative to the reference scenario. Figure A5. EU27 + UK regional absolute weighted CO2 prices in 2030 [€2020]. Figure A6. EU27 + UK regional absolute Non-ETS CO2 prices in 2030 [€2020]. | 2021-12-02T16:09:47.435Z | 2021-11-29T00:00:00.000 | {
"year": 2021,
"sha1": "386907de774b0bbee6b01e524fdf75736ba1bf72",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/14/23/7971/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "a72bef95226417e3c6441ff3fd5ea7f3619cc6f7",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": []
} |
224809785 | pes2o/s2orc | v3-fos-license | Parent perspectives on preschoolers’ movement and dietary behaviours: a qualitative study in Soweto, South Africa
Objective: Childhood obesity is of increasing concern in South Africa, and interventions to promote healthy behaviours related to obesity in children are needed. Young children in urban low-income settings are particularly at risk of excess adiposity. The current study aimed to describe how parents of preschool children in an urban South African township view children’s movement and dietary behaviours, and associated barriers and facilitators. Design: A contextualist qualitative design was utilised with in-depth interviews conducted in the home setting and analysed using reflexive thematic analysis. Field notes were used to contextualise findings. Setting: Four neighbourhoods in a predominantly low-income urban township. Participants: Sixteen parents (fourteen mothers, two fathers) of preschool-age children were recruited via preschools. Results: Four themes were developed: children’s autonomy and the limits of parental control; balancing trust and fears; the appeal of screens; and aspirations and pressures of parenthood. Barriers to healthy behaviours included children’s food preferences, aspirations and pressures to consume unhealthy foods, other adults giving children snacks, lack of safe places to play, unhealthy food environments and underlying structural factors. Facilitators included set routines, the preschool environment, safe places to play and availability of healthy foods. Conclusions: Low-income families in Soweto face many structural challenges that cannot easily be addressed through public health interventions, but there may be opportunities for behavioural interventions targeting interpersonal and organisational aspects, such as bedtime routines and preschool snacks, to achieve positive changes. More research on preschoolers’ movement and dietary behaviours, and related interventions, is needed in South Africa.
Department of Health in 2018 for tracking the health of children under the age of 5 years also outlines healthy dietary behaviours (20) .
A recent study examining preschoolers' movement behaviours and gross motor skills in Soweto highlighted that many young children were short sleepers and seemed to be going to bed late (7) . Concerns around South African children's diets include the lack of dietary diversity in young children's diets, as many are consuming a diet that is high in starchy foods and low in fruits and vegetables, and the consumption of sugar-sweetened beverages, salt and fast food (21,22) . It is important to understand the contexts in which these behaviours occur, as it is recognised that factors like socio-economic circumstances and elements of the built environment have a bearing on childhood obesity and related behaviours (23,24) .
The complex web of influences on childhood obesity has been conceptualised through various applications of the social-ecological model of health (25,26) . According to these, there are several levels and spheres of relevance to childhood obesity and related dietary and movement behaviours, including individual characteristics of a child, home and family settings, preschools, neighbourhoods and wider cultural, societal and political aspects. Primary caregivers have unique insights when it comes to preschoolers' health-related behaviours, and barriers and facilitators to these (27) . A better understanding of the constraining or enabling role of the environments in which preschoolers live can inform future behavioural interventions aiming to benefit low-income communities (27) .
The current study aims to describe how parents of preschool-age (3-5 years) children in Soweto, an urban and predominantly low-income township in Johannesburg, view children's health behaviours and to situate these perspectives in the context of preschoolers' homes and wider environments. Findings regarding parents' perceptions of childhood obesity and preschoolers' size and weight have been published previously (28) , whereas the focus here is on obesity-related behaviours.
Theoretical approach and study design
The current study is situated within a contextualist epistemological approach to qualitative inquiry (29) and utilises semi-structured in-depth interviews, field observations and reflexive thematic analysis (29,30) . This approach enables a focus on individual experiences and perceptions in relation to the environment. What people say reflects their reality, and where possible, individual accounts are contextualised by observations of the environment in which study participants live and any relevant interactions between the individual and other levels of social-ecological models of health (25,26) , such as family circumstances or community characteristics. Individual interviews with parents and field observations were selected due to the insights that using these methods could elicit regarding both family and home circumstances, and other levels of the social-ecological model. The current study is reported in accordance with the Standards for Reporting Qualitative Research (31) .
The study setting
Soweto is a densely populated, urban township in Gauteng Province, South Africa. Colonial and apartheid urban planning policies of racial segregation assigned most of Soweto to Black African residents (32), and the majority of the population is still of Black African descent. The four neighbourhoods included in the current study represent relatively safe and socio-economically diverse parts of Soweto with predominantly formal housing. They were selected due to prior research contacts with preschools that had previously facilitated contact with parents regarding children's participation in other studies. The preschools facilitated initial contact with potential participants who had children in the relevant age group, but they played no further role in the study.
Recruitment
Details of recruitment are described in detail elsewhere (28) . In short, recruitment consisted of a combination of purposive (according to inclusion criteria such as being the primary caregiver of a 3-5-year-old child who attends a specific preschool, fluency in English and different socioeconomic situations) and convenience (driven by availability of participants) sampling through four local preschools. New participants were recruited gradually until there was sufficiently rich and diverse data to fulfil the study's aims. A conscious effort to recruit fathers into the study was made, but only very few potential participants were men.
Written and verbal information about the study was provided to each potential participant, and written informed consent to participate was obtained after opportunities to ask questions about the research and participation. This process was done by the first author with the help of a local fieldwork assistant, who had been trained in relevant qualitative research methods and ethics.
Data collection and analysis
Data collection comprised in-depth interviews and complementary field notes completed by the first author. The interviews were audio-recorded (Philips DVT4010 VoiceTracer) and transcribed verbatim. This process and the materials used are described in more detail elsewhere (28) and in the online Supplementary Material. Interview transcripts were analysed using reflexive thematic analysis according to the process developed by Braun & Clarke (29,30) , and contextualising field notes were used to support interpretations. Analysis software MAXQDA (Release 12.2.0) was used to support transcription, coding and data management. The coding and resulting theme development carried out by the first author was inductive, data-driven and focused on manifest content so as to be open to new concepts or patterns throughout the analysis, and to avoid potentially misguided latent interpretation in this cross-cultural qualitative inquiry. The two other authors acted as 'critical friends' (33) , supporting the refinement of themes. Table 1 summarises socio-demographic characteristics of the sixteen study participants; further details can be found elsewhere (28) . All participants were Black South Africans and the preschool child's biological parent. Most participants were single and lived together with extended family in typical detached houses with electricity and running water that also often have additional rooms or shacks for family members or tenants in the backyard (34) . Some participants were such tenants. The socio-economic circumstances of participants ranged from those who were unemployed and not able to rely on much family support to participants who were living with family members who had a stable income while they were studying or also employed. Traffic-related safety concerns were prominent across the neighbourhoods. However, some participants lived on quieter or more spacious streets where children frequently played in the street. All homes had gated yards of varying sizes.
Results
Through the thematic analysis process, four themes were developed that capture parent perspectives on children's obesity-related health behaviours, and associated barriers and facilitators. The four themes are children's autonomy and the limits of parental control; balancing trust and fears; the appeal of screens; and aspirations and pressures of parenthood. Underlying all four themes is an element of tension and complexity, which reflects the challenges and nuances of parenthood the parents in the current study communicated in the interviews.
Children's autonomy and the limits of parental control
The tension between children's autonomy and the limits of parental control is exemplified in how parents in the study talked about their preschoolers' health-related behaviours and routines, and eating and sleeping in particular. Illustrative quotes are provided in Table 2.
Many children were described as fussy eaters, and this often resulted in young children having considerable autonomy regarding food. Parents showed awareness of foods that should be limited, but this conflicted with their ability or willingness to set boundaries for children. This appeared to be amplified by the food environment around the home, which was described as fuelling children's desire for certain foods and enabling unhealthy snacking. Parents' concerns about unhealthy foods were not only about sugar, fat or salt, all of which were flagged as unhealthy by participants, but many also talked about foods sold in the neighbourhood being potentially expired or cooked in unhygienic conditions. Many participants described themselves as being responsible for buying and preparing food at home, and an element of this role was keeping everyone in the household happy. Some described consulting children and other family members about their preferences, and others simply let children decide for themselves what they wanted to eat, within the limits of what was available and affordable. This was described as normal and practical, except for when children preferred fast food that the family could not afford. The limits of parental control were manifest both in the home and in relation to parents' ability to monitor or influence what children were eating outside the home. Even though pocket money was often described as something for school-age children rather than preschoolers, it was evident from both participant accounts and fieldwork observations that very young children were also given small amounts of pocket money. Since children often played in the street outside their house in groups without much adult supervision, they were able to buy sweets and other snacks sold by street vendors and in small stalls (tuckshops) that were either in or very near the areas in which they played. Some parents also described the challenges of other adults, such as relatives or other parents, giving their children fast food or unhealthy snacks, as this was again outside of their control.
Parents tended to consider preschools a healthy setting for children, involving elements of structure and control. This contrasted with the limited control many parents described having over their children's dietary behaviours. However, parents did have some influence on what their children ate at preschool through the afternoon snacks they were expected to send with their children. The options described by parents were sweetened fruit drinks, yogurt, fresh fruit and small bags of potato chips. Seeing as many parents described bulk-buying of such snacks, potato chips or other non-perishables were usually preferred over the healthier option of fresh fruit. It was also clear from field observations that children sometimes brought sweets with them to preschool. As one mother explained (see Table 2), a teacher had tried to regulate the snacks children could bring, but this was unlikely to be successful due to peer pressure and inconsistent practices among parents. In these ways, decisions that were under parents' control sometimes conflicted with the healthy routines promoted by preschools, for both financial and practical reasons, and due to parents catering to children's preferences.
It was also practical to allow children to decide when to go to bed. Children not wanting to obey adults, or not being tired when the parents went to sleep, were cited as reasons for allowing children to set their own bedtimes. In many homes, children of different ages were allowed to stay up until they were ready to go to sleep, and younger children's greater need for sleep compared to older children was thus not always recognised. Preschool-age children were described as often wanting to sleep at the same time as parents or grandparents, and parents who did have more established routines around bedtimes often pretended to sleep until the preschoolers fell asleep. Some parents therefore found ways to harmonise children's preferences with healthy behaviours through developing specific routines.

Table 2. Interview excerpts about autonomy and control
'We don't know when they go outside the gate what they're buying ... It's better that whatever that they want they tell you, "I want this and that and that", and then you buy for them. 'Cos for me, I prefer to buy everything. They know that they always have juice, crisps, everything, they have burgers, whatever.' (Interview 9, mother)
'You'll find that when he's with his friends, the parents will buy them a packet of sweets or chips or whatever, and they'll be eating and he'll come back with a blue tongue (from sweets), you know.' (Interview 14, father)
'I'd say she's healthy when she's at (pre)school 'cos when she's home she eats a lot of sweet and junk food.' (Interview 1, mother)
'When they're going to crèche (preschool) you have to put for them snacks. Maybe I'm putting for him the Zoom, you know the Zoom, the juice Zoom ... And yoghurt and snacks. Every day I have to put for him.' (Interview 6, mother)
'We were at an opening meeting at crèche, and there is a teacher there who ... was telling us no kids will bring Simbas (potato chips) or any sweet here. Next year you only give them a bottle of water and a fruit ... So if there is a kid there who came maybe with a packet of Simbas then I take my son with an apple then he comes back crying, "Why aren't you buying me Simbas?" you know, ja.' (Interview 11, mother)
'It's challenging 'cos he is a fussy eater ... When you cook pap (maize porridge, traditional staple food) he'll tell you he wants rice. When you cook rice he'll tell you he wants noodles, so he's very challenging when it comes to food ... I prefer to ask him what does he prefer to eat then I can cook what he prefers to eat.' (Interview 7, mother)
'Here at home we don't have a specific time for a child to go to sleep. A child go to sleep when they feel sleepy.' (Interview 10, mother)
'They can decide for themselves. 'Cos they don't want to listen to us.' (Interview 6, mother)
'I can switch off the lights and sleep, make um pretend as if I'm sleeping. Just for him to sleep, then afterwards, wake up and do whatever I'm meant to be doing 'cos normally I read by that time, so yeah. The latest time he sleeps is 10 o'clock.' (Interview 4, mother)
Balancing trust and fears
This theme captures the uneasy relationship parents have with the neighbourhood in which they live, and illustrative quotes are summarised in Table 3. Many participants were positive about their neighbourhoods in general, but had concerns about the safety of their children playing outside. Several parents had grown up in the same area themselves, and they tended to reason that it must be fine for their own children to play outside before it gets dark because they had done the same as children. Nevertheless, fears regarding reckless and dangerous driving in the neighbourhood, and the possibility of children getting kidnapped and murdered, were often expressed.
Playing was something that mostly took place outdoors, either in the yard or, if considered safe enough, out in the street. Parents described children's playing as 'running around', and this was not an indoor activity. Some also had bicycles or scooters, which again required outdoor space. All parents recognised the need or desire of children to spend time playing outdoors, but this had to be balanced with fears about what might happen to children playing outside.
Supervision was described as essential, but many parents talked about not wanting to trust others with their children. The presence of extended family or having enough space for other children to come over to play as opposed to allowing children to go elsewhere to play alleviated the issue of having to trust adults outside one's family. There was not much active parental supervision detectable in the study neighbourhoods during field observations, and supervision of outdoor play was described as checking on children through a window or listening out for sounds of playing.
When asked what could be changed or improved about the neighbourhoods, the most consistent suggestion was for children to have access to safe parks or playgrounds. However, the complicated dynamic of trust and fears in relation to the neighbourhood was present here as well, as some speculated that while it would be important to have more spaces dedicated to children, such new facilities would likely be vandalised or become unsafe.

Table 3. Interview excerpts about trust and fears
'They can go out, I'm fine but yoh (colloquial exclamation) I don't like the idea of them going out, I really don't like it. But I don't have a choice, they're kids. I'm not gonna lock the doors every day! I can't, nobody locked the doors for me so (laughing) I can't do that.' (Interview 4, mother)
'Uhh it is not safe (for kids to play outside the yard) ... Like kids now get kidnapped, murdered, so that's the reason.' (Interview 11, mother)
'I don't feel safe. It's better that he's in the yard because you won't be depending on other people to look after your kid ... I don't trust my neighbours.' (Interview 9, mother)
'Ja then I'm just thinking like there's no other extramural activities around. Because even if they do create, our own people mess it up so, it's quite, it's quite tough ... Like when you get to have like nice parks with nice activities, bins and everything, but then because of people that are exposed to other things they go and damage. So now the kids don't have those places anymore.' (Interview 16, father)
The appeal of screens
As illustrated by the quotes in Table 4, devices such as smartphones, tablets and laptops were described as both entertainment and educational resources, which contributed to their prominence in children's daily routines. Preschoolers also regularly watched cartoons or soap operas on TV, especially at weekends and in the evenings after it got too dark to play outside. Sometimes this was a shared family activity, and a way for parents to spend time with their children. However, the strong desire of children to play with phones and tablets was described as more of a novelty compared with the normalised activity of TV viewing.
On the one hand, parents were impressed with their young children for being able to operate devices like smartphones and tablets. On the other hand, parents found it inconvenient that children were playing with the parents' phones or other personal electronic devices, potentially deleting important information, using too much data or making noise with games.
The reasons for parents to restrict children's screen time were therefore mostly practical. The dangers of strangers on social media or content that is not meant for children also came up, but none of the parents talked about screen time itself as a potentially harmful or unhealthy activity.

Table 4. Interview excerpts about the appeal of screens
'He wants to play games with it and by the end of the day ... I find my phonebook deleted ... I keep it away. But then whatever chance or whatever opportunity that he gets he will definitely take it and I find him trying, just pressing and pressing ... So yeah, the last time he found out my pattern. I found him already opening my phone and playing the game already. So, they're quite smart. They're quite smart.' (Interview 16, father)
'I bought them these tablets but not like it's for games, it's not really games. It's for learning but it's too loud. Yoh! (colloquial exclamation) And obviously they need it volume up. Play it, volume up, ohh. So I took out the batteries. It must come out. It's too much. So they sing along to that. Your ABCs, your 123s.' (Interview 15, mother)
Aspirations and pressures of parenthood
This theme captures the ways in which parents described aspirations and pressures in relation to being a parent, often stemming from their social and physical environment, as well as from the challenges of unemployment. This theme adds nuance to some of the tensions already presented under the first theme of autonomy and control, as it further contextualises the decisions made by parents, particularly in relation to food and the food environment. Quotes illustrating these aspirations and pressures are summarised in Table 5.
Both healthy and unhealthy foods were described as easily available in the neighbourhoods, and one challenge was maintaining healthy dietary behaviours for children when others could be seen to do something different. The way in which the affordability of different foods was described paints a somewhat complex picture. Vegetables and other foods described as healthy were reportedly cheap and available everywhere, but many parents expressed concerns about being able to afford enough food for their families, and some preferred to spend their money on more filling foods than vegetables. While financial constraints were often cited in relation to buying what were seen as luxurious or unhealthy foods, such as takeaways, some parents also found it easier and cheaper to buy local fast food compared with buying all the ingredients for cooking a meal. Parents talked about wanting to make their children happy, and while loving and caring for children was typically described as spending time together, there were also aspirations related to taking children to places like malls, cinemas and fast food restaurants. A typical narrative was expressing that love and care are the most important things parents provide for their children, and that love is not really about money, but there are things money can buy that could make children happier or that would make parents feel like they really are doing their very best for their children.
Only a few families were able to spend money on such things regularly, as many were affected by unemployment and found it difficult to afford everything they needed each month. One mother's fear that children might turn to stealing as they got older in order to compensate for parental shortcomings illustrates the severe pressure some parents felt. The feeling of not being able to give your children everything you want was one of the most difficult challenges parents described facing. The role of extended family was described as compensating where parents were lacking, for example, in buying presents or treating children to takeaway meals.
Despite the pressures and sometimes unachievable aspirations, most parents in the study described themselves as being happy with their situation. Typically, the only desired change was finding work, or if already working, getting a better paid job. Having a job or sufficient family support tended to mean that both necessities and luxuries could be afforded, although parenthood still came with its own challenges.

Table 5. Interview excerpts about aspirations and pressures
'We have a Debonairs (pizzeria) just next to us. Imagine the child passing by with a box of pizza here and then this one she is sitting at the gate. "Mommy doesn't buy me that, she is always making me eat carrots and all those things", can you imagine the pressure?' (Interview 12, mother)
'Love is very important. It's something that you can never buy or, it comes naturally. So that's the most important thing I guess for me. It's just to give them that ... Being with them.' (Interview 16, father)
'You know, sometimes you need to go out and especially to the malls ... It blesses you as a mom, at least I am doing something for my kids. Even though I know I don't have to, you know kids, even you can buy a small pizza, to them it's like you did something big, so I just buy a pizza ... Honestly, I only do that every three months, I have to budget ... so that I can be able to afford to take them to, to the mall.' (Interview 8, mother)
''Cos you know, without money at home, I cannot be raising my kids, you understand? We're gonna face some certain challenges ... You find (name of older son) is nine, "Oh my mom can't buy for us food, my mum can't do that", he's gonna start going out, start robbing people, start doing funny things. Just to put food uh food on the table so I don't want that.' (Interview 4, mother)
'(Members of the extended family) try to take a second parent role, you know. So, maybe if you're lacking somewhere, they'll cover up that part without you even noticing. Let's say maybe I'm not like financially stable. I cannot get my child takeaways maybe every month ... You know. They'll be there to get him takeaways.' (Interview 14, father)
'The unemployment, hey. Yeah. Not being employed, yeah. I think that's all (I would change). 'Cos I I-I have support here at home, like we take care of each other at home but the not working part, not getting a permanent job, yeah.' (Interview 4, mother)
'Challenges, eish (colloquial exclamation), no, I don't think I have any challenges. I'm doing great. I'm doing good (laughs). Obviously it's not, it's not easy being a mom. It's not easy. And obviously you are never taught how to be a mom. Just, it's a natural thing. You don't know whether you're doing right, or you're doing wrong. But you hope that you're doing the best you can.' (Interview 15, mother)
Discussion
The aim of the current study was to describe how parents or caregivers of preschoolers in an urban township setting in South Africa view children's health behaviours and to situate these perspectives in the context of the home and wider environment. Four themes were developed: children's autonomy and the limits of parental control; balancing trust and fears; the appeal of screens; and aspirations and pressures of parenthood. These themes centre on complex barriers and facilitators to healthy behaviours, and they reflect the nuanced ways in which parents in the study described their views and situations.
The participants showed varying degrees of awareness regarding health-related behaviours, and health itself was not necessarily the guiding principle in how parents made decisions that related to preschoolers' movement and dietary behaviours. Practicality, aspirations, pressures and financial constraints, among other things, played a role in this. The social-ecological model helps to conceptualise the barriers and facilitators highlighted by participants, and these are summarised in Fig. 1.
Fig. 1. (colour online) Barriers and facilitators to healthy behaviours organised by levels of the social-ecological model.
While the focus was on the social and physical environments, such as family routines and neighbourhood food environments, the interviews with parents of preschoolers also shed light on how structural factors, such as unemployment and poverty, were prominent concerns in the families' lives and inseparable from other influences on health behaviours. Other studies from Soweto have also highlighted the limits of individual agency vis-à-vis structural constraints in this low-income setting when it comes to health-related behaviours (35). Recognising these tensions in people's lived experiences and circumstances necessitates critical consideration of what can be achieved through public health interventions, and in particular, who is most likely to benefit (36). From a public health perspective, the findings of the current study show that there are several aspects of children's movement and dietary behaviours that could be targeted through interventions.
In particular, some of the behaviours and routines that parents described around children's snacking and sleeping may be possible to address without being hindered by structural or environmental factors. While these complex constraints should also be addressed, they will require approaches beyond behavioural public health interventions. The most promising avenue for behavioural interventions may thus be the targeting of behaviours more constrained by interpersonal and organisational rather than structural or community factors (Fig. 1). It is, however, important to acknowledge that behavioural interventions are influenced by many different factors and may be more or less effective depending on the social-ecological level (37) .
Research in high-income countries (HIC) has found parenting styles to have a bearing on preschool-age children's health behaviours, such as fruit and vegetable consumption (38,39), and supporting parents to promote healthy behaviours could be explored in the Sowetan setting too. Preschools and parents might be able to overcome some aspects of peer pressure and unhealthy snacking through coordinated action. Similarly, in households where different family members have enough space to feasibly sleep at different times, there is an opportunity to establish bedtime routines that would ensure that preschoolers get enough sleep. Engaging families and preschools in this way could be a way to improve the degree to which young children meet sleep and dietary guidelines, given that these have been identified as areas of particular concern in South African children (7,21,22,40).
Similarly, when it comes to screen time, the dual nature of how parents approach it, as both educational and disruptive, could be utilised in promoting behaviours that meet recommended guidelines. If parents are willing and able to limit screen time for certain purposes, without feeling like potential educational benefits will be lost, it may be possible to support parents to establish routines and behaviours for their preschoolers that align with guidelines and that favour educational and developmentally appropriate content. While guidelines focus on the quantity of screen time, the evidence base on the health and other effects of different types of screen time is growing, although mainly through research in HIC outside the African continent (12,41). Intervention development to address screen time and the earlier mentioned routines around sleeping and eating in the specific context of Soweto would need to be done in a participatory way in order to fully align with people's realities and, for example, the wider household and family dynamics.
Parents worried about different aspects of their children's health and well-being. The prominence of their concerns around food safety and hygiene in the neighbourhood was understandable against the background of a recent listeriosis outbreak (42), and their fears and lack of trust expressed in relation to children's personal safety are, unfortunately, well founded in light of statistics on violent crime and child abuse in South Africa (43,44). While there is a growing discourse around the promotion of so-called 'risky play' in high-income settings (45), there is a need to make outdoor play safer for children in South Africa. Even if South African preschoolers are relatively active (6,7), it is unacceptable for it to happen at the expense of their safety and well-being. A focus on the interpersonal level in interventions and engaging parents to address barriers to healthy behaviours together may also be a way to stimulate some trust and community cohesion through parents realising they share the same concerns for their children. However, crime and violence will need to be addressed on multiple and higher levels, and interpersonal trust is unlikely to improve significantly without concrete improvements in safety.
It is clear from the analysis presented here that parents want their children to learn, develop and feel happy and loved. Supporting these positive notions through interventions that promote nurturing care in a way that also promotes healthier behaviours could be a promising approach. While nurturing care and parenting interventions have tended to focus on the first 2 years of life (46,47) , the findings of the current study suggest that preschoolers may also benefit from such an approach. Research exploring this type of interventions in urban townships in South Africa is beginning to emerge (48) , and rigorous evaluations are needed to determine the effectiveness of such approaches on promoting healthy behaviours.
Other research in South Africa has explored the ways in which aspirational consumption is linked to indebtedness, poverty and the legacy of apartheid (49) . Indeed, the aspirations that parents in the study expressed around buying certain things, or giving their children experiences like visiting malls, provide insights into the lived experience of poverty and inequality in a society where consumption is a way of signalling social status or well-being. It is evident from the parents' narratives that consumption can also signify parental love and the pursuit of making one's children happy. It may be difficult to steer these types of aspirations towards more health-centred ideals without also introducing an element of judging parents for the choices they make. Moreover, given the framing of fast food consumption as occurring on special occasions, it may not be a pivotal part of children's diets on which to focus.
A review comparing the associations between parenting practices and child health and developmental outcomes in Sub-Saharan Africa with those in HIC found that such associations were broadly similar across country settings in the existing evidence base (50) . This points to the transferability of such findings. However, despite similarities or theoretical soundness of findings from different settings, it should not be assumed that qualitative evidence from HIC can directly inform interventions elsewhere.
There are both similarities and differences when comparing qualitative evidence of parent perspectives on preschoolers' movement and dietary behaviours from other settings to the findings from Soweto. A recent study examined such parenting practices in Brazilian immigrant families in the USA (51) . The Brazilian parents actively discouraged screen time in favour of physical activity and set boundaries in a more health-centred manner than the Sowetan parents, who mostly restricted screen time for more pragmatic reasons. This suggests differences in awareness about the health implications of movement behaviours. Similarly to the Sowetan context, traffic-related concerns, financial constraints and limited space were cited by the Brazilian parents as limiting opportunities to play and be active. As for setting boundaries related to food and eating, a study with Nepali mothers of children aged 5-10 years also reported mothers feeling powerless due to both children's preferences and obesogenic environments (52) . Giving children what they want is understandably a desire felt by parents across settings, and children's preferences are a commonly reported factor contributing to unhealthy dietary intake (16) .
The findings of the present study both support some other qualitative findings and add new perspectives and nuance to the growing qualitative evidence base on preschoolers' movement and dietary behaviours. The findings further underscore that Soweto is a dynamic and complex setting in which to promote health. Much needed improvements in livelihoods and employment opportunities may also mean increased opportunities for consuming unhealthy foods. In engaging parents in preschoolers' health behaviours, it would be important to promote healthy routines in a non-judgmental way and try to inspire health-aligned aspirations by, for example, making the case for healthy behaviours promoting children's development and learning, which is evidently important to parents (48).
There are some strengths and limitations that relate to the design and methods of the current study. Given the complexity illustrated in social-ecological models, many other perspectives, including children's own views and those of, for example, preschool staff (53), are also relevant to explore but were beyond the scope of the present study. A specific limitation is the small number of male participants, and the fact that no other caregiver types, such as grandparents, aunts or uncles, were recruited into the study despite their relevance as primary caregivers of many children in Soweto. Moreover, asking parents about health-related behaviours inevitably involves some social desirability bias as their responses are an expression of how they wish to be viewed as parents. Although interview findings were contextualised and triangulated with the help of field observations, it is impossible to determine whether specific claims made by participants were influenced by social desirability bias. In addition to social desirability bias, the socio-economic imbalance between participants and researchers may also have encouraged participants to emphasise negative aspects of their experiences due to expectations held towards the research team or university. However, there were no requests of any kind made by participants, and accounts of difficult experiences or circumstances were sometimes followed by observations that talking about them had felt helpful. Overall, the open and inductive approach of the qualitative inquiry seemed to allow for nuanced and in-depth accounts from research participants despite the likely barriers introduced by the cross-cultural and once-off nature of the interviews (54,55).
It is important to reflect on the role of the authors in shaping these analyses, and Abimbola's framework on authorial reflexivity is useful here (56) . One author is a South African citizen, and all three are White women with no personal experience of being a parent in Soweto. No claims are therefore made about possessing a fully local perspective from which to write this article. Similarly, the research involves a foreign gaze, situating these findings in an international, HIC-dominated literature around health behaviours and behavioural interventions. The interpretations presented here may thus differ from those made with the benefit of a more emic perspective throughout the study. In particular, the focus on informing interventions may have resulted in simplified assumptions about children's behaviours and circumstances. Nevertheless, the research was guided by trying to understand rather than judge participants, and the authors are responsible for any misrepresentations or misunderstandings.
Conclusions
The current study paints a complex picture of preschoolers' movement and dietary behaviours in Soweto. Low-income families face many challenges that cannot easily be addressed through public health interventions, but there may be opportunities for behavioural interventions targeting interpersonal and organisational aspects, such as bedtime routines and preschool snacks, to achieve positive changes in children's health behaviours. More research on preschoolers' movement and dietary behaviours, and related public health interventions, is needed in South Africa. | 2020-10-21T13:06:19.488Z | 2020-10-20T00:00:00.000 | {
"year": 2020,
"sha1": "b7e9e02c8347c944b88f7afdde19e2e95f88bff4",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/CC5571854D9C8EC19BF3EA8360A0A783/S1368980020003730a.pdf/div-class-title-parent-perspectives-on-preschoolers-movement-and-dietary-behaviours-a-qualitative-study-in-soweto-south-africa-div.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "412c3f161c4a0f382bc4736e2ecb88350056d085",
"s2fieldsofstudy": [
"Medicine",
"Sociology",
"Education",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
89663467 | pes2o/s2orc | v3-fos-license | Gamma radiation induced mutation in M2 generation of Pea (Pisum sativum L.)
Different doses of gamma irradiation (6, 7, 8, 9 and 10 Krad, with a zero dose as control) were used to assess morphological and proximate parameters of Pisum sativum. Comparison of the various treatments with the control demonstrated that gamma radiation significantly influenced several morphological and proximate parameters. Days to germination remained the same as the control at most doses, but 7 Krad showed the fewest days to germination. Germination percentage and seedling survival rate were 100 percent in both the control and the irradiated seeds. The 7 Krad dose took the fewest days to flower initiation compared with the control and the other doses; flower initiation thus occurred earlier at 7 Krad than in the control. Fruit initiation and fruit maturation were also earlier at 7 Krad than in the control. Plant height was significantly increased at 10 Krad compared with the control. At 7 Krad the number of pods per pot was higher than in the control. Pod length decreased at all doses, the greatest pod length being observed in the control. The maximum number of seeds per pod was recorded in the control. The weight of 1000 seeds was highest at 6 Krad. Proximate analysis showed maximum ash content at 9 Krad compared with the control, and moisture was also highest at 9 Krad. Protein content was highest at 7 Krad, and fat content at 8 Krad, compared with the control. Ash, moisture, protein and fat contents thus increased significantly at higher gamma radiation doses compared with the control.
Introduction
Pisum sativum is an annual, self-pollinated and often climbing herb belonging to the family Fabaceae. The term "pea" can refer to the small spherical seed or to the pod. The name "pea" is also used for other edible legume seeds, such as chickpea (Cicer arietinum), pigeon pea (Cajanus cajan), cowpea (Vigna unguiculata) and sweet pea (Lathyrus spp.), the last of which is grown as an ornamental. The origin of pea is probably south-western Asia, possibly north-western India, Pakistan and a few adjacent territories of the former USSR and Afghanistan, from where it spread to Europe [1]. On the basis of genetic diversity, four centres of origin, namely Abyssinia, Central Asia, the Near East and the Mediterranean, have been recognised [2]. Pea (Pisum sativum L.) is one of the world's oldest agricultural crops; archaeological evidence dates its presence back to 10000 B.C. in the Near East and Central Asia. Pea is a weak-stemmed plant with tendrils that help support it; the leaf is composed of a rachis ending in a branched tendril. Its stem is weak and glabrous, with alternate leaves, terminal branched tendrils, and ovate or elliptic leaflets. Pea was among the first cultivated crops, around 7000-6000 BC, and is thought to have originated from south-western Asia. Peas grow in a wide range of environments. They grow best in relatively cool climates with average temperatures between 7 and 24°C and in areas with 800-1000 mm annual precipitation [3]. They can be found on a wide range of soils from sandy loams to heavy clays, provided the soil is well drained. The ideal soil pH is 5.5-6.5; a pH of 7-7.5 does not hinder growth if the soil is not over-limed and prone to manganese deficiency [4]. Acidic soils, high-aluminium soils and waterlogged areas are injurious to pea growth [3]. Hot weather and drought stress are particularly damaging during the pea's flowering period [4]. Pea ranks fifth in importance among food legumes in Turkey. It is rich in proteins, starch and digestible nutrient content but low in fibre, which also makes it an excellent livestock feed. It is a particularly important legume grain in temperate regions with various food (dry seed, vegetable) and feed (seed, fodder) uses [5]. In the United States, 550,000 MT are produced commercially for food every year and 200,000 MT of field pea for feed [6]. Pea is a high-yielding, short-duration crop with high protein content. It is one of the major pulses in world trade and accounts for around 40% of the total trade in all pulses [6]. Legumes are significant sources of sugars, dietary fibres, vitamins and minerals, with protein contents (17-40%) higher than those of cereals (7-13%) and comparable to the protein content of meat (18-25%) [7-9]. Green and ripe fruits and seeds contain starch, proteins, oil, galactolipids, alkaloids, trigonelline and piplartine, essential oil and soluble sugars [10]. Cis,trans- and trans,trans-xanthoxin are found in the roots [11]. Seeds yield trypsin and chymotrypsin inhibitors. 100 grams of the edible part of fresh sweet pea pods contain: energy 67 kcal, water 82.4 g, protein 3 g, fat 0.4 g, sugar 12.8 g, dietary fibre 2.1 g, ash 1.4 g, calcium 92 mg, phosphorus 48 mg, iron 1.2 mg, vitamin A 52.0 µg, thiamin 0.16 mg, riboflavin 0.09 mg, niacin 1.0 mg, ascorbic acid 67.0 mg.
Leaf, petiole, tendril and stems yield kaempferol-3-triglucoside, quercetin-3-triglucoside and their p-coumaric esters. Growing pea seedlings yield high amounts of D-alanine. Free homoserine has been identified in the seeds and pods. Mutation breeding is a potent, fast and cost-effective way of introducing quantitative and qualitative variability into crop plants. Mutation is a process by which a gene undergoes a structural change, such as the exchange of one nucleotide for another. Numerous crops with enhanced qualities have been obtained utilising induced mutation [12]. Mutagenic agents, for example chemicals, X-rays, electron beam irradiation and γ-rays, are typically used to induce mutations artificially. This procedure produces mutants [13,14], which can then facilitate the identification, isolation and cloning of genes for designing crops with improved yield and other quality attributes [15]. Inducing mutations in crops to modify their genetic make-up and searching for desired changes that enhance yield potential and other characters of interest is a common feature of research work all over the world. There are different types of ionizing radiation (viz., X-rays, gamma rays, protons, neutrons, alpha and beta particles), but gamma rays are the most widely used for inducing mutations, as they have a shorter wavelength and more energy per photon than X-rays and penetrate deep into the tissue [16,17]. Besides their economic benefits, several mutants also play an important role in the study of genetics and plant development. Several positive changes have been made in agricultural crops by using gamma irradiation, and crops with improved attributes have successfully been developed through induced mutagenesis [18,19]. Gamma irradiation has drawn attention as a rapid and novel method to improve the qualitative and quantitative characters of many crops, and it has been used extensively as a powerful physical mutagen [20]. Gamma radiation can be valuable for the modification of physiological characters [21]. Gamma rays are known to influence plant growth and development by inducing cytological and morphogenetic changes in cells and tissues [22].
Materials and methods
M2 seeds of pea were used in the current investigation; these seeds were obtained from M1 plants grown in 2013-14. The gamma irradiation used was generated from a cobalt-60 source at the Nuclear Institute for Food and Agriculture (NIFA), Peshawar, Pakistan. Pea seeds (M0) were irradiated with 6, 7, 8, 9, and 10 Krad, while non-irradiated seeds of each variety were kept as the control. In the trial executed in 2013-2014, the M0 seeds gave rise to M1 plants whose seed (the M2 generation) was analyzed in the present examination, which was planned to assess the impact of gamma rays on the M2 generation.

Field experiment
A field experiment was carried out in the greenhouse of the Department of Botany, Islamia College Peshawar, in 2014-2015. The M2 seeds of each dose were sown on 21 November 2014 in pots; all pots were equally spaced, with identical soil content in each pot. There were thirty pots in total. The design was completely randomized, with five replicates per dose, and an equal number of seeds was sown in all pots. The pots were checked frequently for water requirements.

Parameters
The parameters considered in this investigation were germination rate, days to germination, flower initiation, flower maturation, seedling survival rate, fruit initiation, fruit maturity, number of pods per plant, pod length, number of seeds per pod, plant height, 1000-seed weight, moisture content, ash, fats, and proteins.
Statistical analysis
Experimental data were statistically analyzed with analysis of variance (ANOVA) and least significant difference (LSD) comparisons at α = 0.05, using Statistics 10.0 software.
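To make the analysis concrete, the following sketch reproduces the two calculations named above, a one-way ANOVA followed by Fisher's least significant difference at α = 0.05, in Python with scipy. The dose groups and replicate values are hypothetical placeholders chosen only to match the design (five replicates per dose); they are not the study's data.

# Sketch of the ANOVA + LSD procedure at alpha = 0.05.
# Replicate values below are hypothetical placeholders, not the study's data.
import numpy as np
from scipy import stats

groups = {
    "control": [15, 15, 16, 14, 15],
    "6 Krad":  [14, 14, 13, 15, 14],
    "7 Krad":  [13, 13, 12, 14, 13],
}

samples = list(groups.values())
f_stat, p_value = stats.f_oneway(*samples)  # one-way ANOVA across doses
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Fisher's LSD: t_crit * sqrt(MSE * (1/n_i + 1/n_j)); equal n here.
n_total = sum(len(s) for s in samples)
k = len(samples)
df_error = n_total - k
sse = sum(((np.array(s) - np.mean(s)) ** 2).sum() for s in samples)
mse = sse / df_error          # mean square error (within groups)
n = len(samples[0])           # five replicates per dose
t_crit = stats.t.ppf(1 - 0.05 / 2, df_error)
lsd = t_crit * np.sqrt(mse * (2 / n))
print(f"LSD(0.05) = {lsd:.2f}")
# Two dose means differing by more than the LSD are declared
# significantly different, as in the comparisons reported below.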
Proximate analysis
Proximate analysis of the seeds was carried out at the Nuclear Institute for Food and Agriculture (NIFA), Peshawar, Pakistan.
Results and discussion
Gamma irradiation, known for its mutagenic effects, also showed a pronounced effect on the M2 generation of pea, and the pea variety responded to the gamma-ray doses. The following results present the findings of the current investigation.

Days to germination
Table 1 presents the effect of gamma irradiation on days to germination of pea in the M2 generation. Days to germination decreased significantly at 7 Krad (13 days), followed by 6, 8, 9, and 10 Krad (14 days each), as compared to the control (15 days). Some researchers have found that gamma irradiation hastens seed germination, probably because short-wavelength photons (ie, gamma rays) are more energetic than visible-light photons (>400 nm) and therefore act more strongly on the surface of plant cells, causing the final breakdown of the seed coat and allowing germination to occur [23]. Similar results were reported by [24].

Germination percentage
The results in Table 1 revealed that germination percentage was non-significantly affected by gamma irradiation: all doses maintained the maximum germination percentage relative to the control (100%).
The stimulating effect of gamma rays on germination may be due to the activation of RNA or protein synthesis, which occurs during the initial stage of germination after seed irradiation [25]. Similar results were reported by [25].

Seedling survival percentage
Table 1 presents the effect of gamma irradiation on seedling survival percentage of pea in the M2 generation. All doses left seedling survival unchanged relative to the control (100%); it can be concluded that gamma irradiation had no effect on seedling survival percentage.

Days to flower initiation
Table 1 shows the effect of gamma irradiation on days to flower initiation of pea in the M2 generation. Gamma irradiation non-significantly affected days to flower initiation; statistically, all doses remained on a par with the control. The values obtained were: control (92 days), 6 Krad (93 days), 7 Krad (90 days), 8 Krad (91 days), 9 Krad (92 days), and 10 Krad (90 days).

Days to flower maturation
Table 1 presents the effect of gamma irradiation on days to flower maturation of pea in the M2 generation; the effect on this temporal trait was non-significant. The values observed were: control (95 days), 6 Krad (96 days), 7 Krad (94 days), 8 Krad (97 days), and 9 Krad and 10 Krad (98 days each). Mutants with changes in flowering and maturity time have been reported by many workers because radiation generally delays flowering and maturity [28].

Fruit initiation and maturation
Table 1 presents the effect of gamma irradiation on days to fruit initiation of pea in the M2 generation: control (110 days), 6 Krad (111 days), 7 Krad (107 days), 8 Krad (114 days), 9 Krad (115 days), and 10 Krad (118 days), indicating that gamma irradiation delayed fruit initiation. Table 1 also presents the effect of gamma irradiation on days to fruit maturation in the M2 generation. Fruit maturation was non-significantly affected, but increasing radiation delayed it: the highest mean was obtained at 10 Krad (146 days), followed by 9 Krad and 8 Krad (144 and 142 days, respectively), as compared to the control (137 days), while the lowest mean value was obtained at 7 Krad (133 days). It can be concluded that gamma irradiation delayed both fruit initiation and fruit maturation. Ionizing radiation has been reported to cause inactivation of growth regulators, leading to delayed growth of plants, and [32] attribute the delay in plant height to an increase in the production of active radicals responsible for lethality, or to the increase in gross structural chromosomal changes induced by radiation.

Number of pods/pot
Table 2 presents the effect of gamma irradiation on the number of pods per pot of pea in the M2 generation. The number of pods per pot decreased significantly at 6 Krad (11.4 pods/pot), followed by 10 Krad (11.8 pods/pot) and 8 Krad (14.6 pods/pot), while it increased significantly at 7 Krad (17.4 pods/pot) as compared to the control (15.2 pods/pot); 9 Krad showed a non-significant effect.
[33] screened high-yielding mutants in the chemical-mutagen-induced progeny of Vigna radiata and reported an increase in the number of pods produced per plant and in total seed yield at lower doses of chemical mutagens.
Pod length (cm)
Pod length showed a decreasing tendency with increasing radiation dose. The maximum mean value for this trait was observed in the control (6.058 cm); all doses decreased pod length significantly, at 8 Krad (5.242 cm) followed by 9 Krad (5.266 cm), 7 Krad (5.36 cm), 6 Krad (5.38 cm), and 10 Krad (5.409 cm), as compared to the control (6.058 cm).

Number of seeds per pod
Table 2 presents the effect of gamma irradiation on the number of seeds per pod in the M2 generation. Gamma irradiation had an inhibitory effect on this trait. The highest mean value was obtained for the control (4.2565 seeds/pod), and all doses decreased the number of seeds per pod significantly: 7 Krad (3.602 seeds/pod), followed by 8 Krad (3.62 seeds/pod), 9 Krad (3.768 seeds/pod), 10 Krad (4.081 seeds/pod), and 6 Krad (4.235 seeds/pod).

1000 seed weight (gram)
Table 2 presents the effect of gamma irradiation on the 1000-seed weight of M2-generation pea seeds. The 1000-seed weight was maximal at 6 Krad (200 g) but decreased non-significantly at the other doses, ie, 7 Krad (169.155 g), 8 Krad (179.028 g), 9 Krad (173.041 g), and 10 Krad (169.13 g), as compared to the control (180.04 g). | 2019-04-02T13:14:49.697Z | 2018-06-10T00:00:00.000 | {
"year": 2018,
"sha1": "bc80a41f963d4f56c65280b4a8e6d288f37ffb99",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.19045/bspab.2018.700102",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "eef223faf590ba24a50322e7d94534be6600df86",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
234370392 | pes2o/s2orc | v3-fos-license | Implementing a New Technology in Diagnostic Services for Tuberculosis in Nigeria
This article analyses the experience of the Federal Ministry of Health (FMOH), Nigeria in its partnership arrangement with some international organisations to introduce a new technology to its diagnostic services for tuberculosis (TB) by adopting the use of the GeneXpert test in line with the recommendation of the World Health Organisation (WHO). It was a major health service reform targeted at achieving the third sustainable development goal. Therefore, the objective of this article was to review the literature on management issues and problems of change surrounding the implementation of GeneXpert as a new technology in the diagnostic services for TB in Nigeria, and to provide learning opportunities for health service administrators in resource-limited settings that might be considering such an arrangement. A literature survey of articles published between 2008 and 2018 was conducted, using Google Scholar, ScienceDirect and Scopus, to identify and review articles that analyse the change management process in the introduction of the GeneXpert test for TB diagnosis in Nigeria. Consequently, a total of 10 articles were critically analysed. The review showed a paucity of articles that examined how the change process in the introduction of the GeneXpert test for TB diagnosis was managed in Nigeria. The literature survey identified several challenges in the change process, such as human resources and capacity building, especially because the use of GeneXpert requires a certain level of computer literacy. In conclusion, this review highlighted the fact that the reform might be sustainable because of the adoption of a decentralised service system in the implementation process.
Introduction
This article analyses the experience of the Federal Ministry of Health (FMOH), Nigeria in its partnership arrangement with some international organisations to introduce a new technology to its diagnostic services for tuberculosis (TB) by adopting the use of the GeneXpert test in line with the recommendation of the World Health Organisation (WHO). The GeneXpert test is a molecular test for TB that has the potential to revolutionize the diagnosis and care of people with TB [1]. The technological advancements in the operation of GeneXpert machines give clients the opportunity to have their test results in less than 2 hours [2]. Meanwhile, TB is a deadly disease with a significant burden on the economy and human development in Nigeria [3,4]. The essay will raise some of the issues surrounding the implementation of this reform, for example: public private partnerships (PPPs) with relevant stakeholders in the health sector for technical and financial support; procurement and maintenance of the GeneXpert machine; its distribution to health facilities at the community level to ensure a decentralised service; training of health personnel to build their capacity on the use of the new technology; the data collection and reporting system for performance management; and monitoring and evaluation (M&E).
The existing Nigerian health service structures in the fight against tuberculosis have been in place for many decades, but efforts to improve techniques for diagnosing TB have historically been very limited. The common diagnostic method for TB in Nigeria, "smear microscopy", is 125 years old and routinely misses half of all TB cases [5]. Hence, TB tests are outdated and inadequate in the country. The primary challenge is that only 16% of tuberculosis cases are being detected in the country [4,6]. Therefore, the nation's Federal Ministry of Health identified an urgent need to further build the system to ensure that the structures are able to deal with this challenge, in the spirit of global sustainability and wellbeing. It therefore decided in 2012 [7] to pursue a policy change in the diagnostic services for TB after extensive consultation and engagement with relevant stakeholders. The essay aims at developing a detailed understanding of management issues and problems of change surrounding the implementation of GeneXpert as a new technology in the diagnostic services for TB, and provides learning opportunities for health service administrators in resource-limited settings that might be considering such an arrangement.
Analytical description of the organisation: The delivery of TB services has undergone enormous transformation in recent decades, with new organisational forms emerging as organs of FMOH. In 1988, FMOH created the National Tuberculosis and Leprosy Control Programme (NTBLCP) within the Department of Public Health, with the aim of eliminating TB and leprosy. The organisational structure of NTBLCP is such that it has a National Coordinator as the head, supported at the federal, state and local government levels by a team of medical officers, laboratory scientists and other support staff [8]. The fight against tuberculosis has always been a collaborative effort, as it involves joint efforts from governments, communities, civil society organisations and many development partners and international organisations. Meanwhile, it has been indicated that efficiency, flexibility and value for money might be guaranteed with strong collaboration among actors from both the public and private sectors [9]. This practice or arrangement of 'hybrid' governance like PPPs is different from contracting-out or outsourcing, because PPP has to do with risk sharing and co-production [10] while contracting-out or outsourcing is a temporary business relationship in which the government retains control and ownership. The problem with this type of arrangement, as reported, is the delay experienced during the negotiation stages, which often results in huge advisory cost overruns [11].
FMOH used the public private partnership concept as a procurement strategy in the fight against TB because, in this type of arrangement, private organisations initially have to finance the programme, and this offers a form of relief to the national government. In early 2013, for example, a grant from the Centers for Disease Control and Prevention (CDC) was used to procure some GeneXpert machines [12]. The cost of GeneXpert is challenging in resource-limited areas, and PPPs assist and reduce pressure on government budgets, even though this relief may be temporary and unsustainable.
Goals/impetus for the reform: This is a major health service reform targeted at achieving sustainable development goal (SDG) 3. The third SDG is to ensure healthy lives and promote well-being for all at all ages, with a specific call to end the epidemics of HIV, TB and malaria by 2030 [13]. The targets proposed are a 90% reduction in tuberculosis deaths and an 80% reduction in new cases by 2030. To achieve these milestones, novel tools and approaches for finding tuberculosis cases are essential, as well as expanding access to tuberculosis services through health systems strengthening, particularly at the primary care level. In 2010, WHO recommended the use of the GeneXpert test for the diagnosis of tuberculosis globally because of the failure of "smear microscopy" to detect cases of tuberculosis in some situations, such as human immunodeficiency virus (HIV) co-infection and drug-resistant cases, and Nigeria adopted its use in 2012. This may therefore be analysed as a case of policy transfer leading to a policy change or policy reform.
Methods
A literature survey of articles published between 2008 and 2018 was conducted, using Google Scholar, ScienceDirect and Scopus, to identify and review articles that analyse the change management process in the introduction of the GeneXpert test as a new technology in tuberculosis (TB) diagnostic services in Nigeria. A literature review was considered an appropriate research design for this study, given the dearth of studies that critically analyse how the change process of introducing GeneXpert in Nigeria was managed. The review followed a sequential pathway: defining the research question; locating relevant research articles; selecting the research articles for review; summarising and documenting; and organising, analysing, synthesising and reporting findings. Key themes such as stakeholder engagement, the change environment, implementation structure, challenges of change, tasks and functions, and monitoring and evaluation were adapted and put together to develop an understanding of the change management process in relation to the introduction of the GeneXpert test in Nigeria.
Results and Discussion
A total of 10 articles were critically analysed. The review showed a paucity of articles that examined how the change process in the introduction of the GeneXpert test for TB diagnosis was managed in Nigeria. The key themes identified are discussed below.

Stakeholders consultation and engagement: The NTBLCP, led by its national coordinator, engaged in strong consultation and engagement to gain support and ensure the success of this change process, as many stakeholders were involved in the roll-out of this new method of diagnosis. The professional bodies of doctors, nurses, laboratory workers and other health workers were fully consulted from the planning stage to avoid resistance to the proposed change, because they form the staff strength of the hospitals where the new technology would be deployed. NTBLCP convened stakeholders' meetings attended by people from diverse backgrounds, cultures, beliefs, values, norms, institutions and professional affiliations, including the media professionals who helped to create awareness and community sensitization. The Honourable Minister for Health, at one of the stakeholders' meetings, emphasized the need for the introduction of GeneXpert in order to create readiness for change among stakeholders. He analysed the impact of TB on the economy and human development to justify the need for the new technology. Meanwhile, it has been noted that readiness for change is reflected in the beliefs, intentions and attitudes of an organisation's members regarding the extent to which changes are needed and the capacity of such an organisation to effectively implement those changes [14]. The media helped to disseminate a great deal of information regarding the operation of GeneXpert. This is worth noting because, though process changes might appear simple, they are actually complex, as many policies may be affected [15]. Hence, they usually call for extensive consultation, for example with labour unions and the public in general, which is a very long process.
The change environment: Some external pressures in terms of political, economic, technological and social factors influenced the change to GeneXpert. The limitation of "smear microscopy" as a TB diagnostic service met with international politics, as demonstrated by WHO's call for the introduction of a better alternative method. A major challenge to the control of tuberculosis is a diagnostic process that requires multiple visits, with consequent expenses from clinic fees and transport. Most patients attend clinics with companions, thereby increasing transport fees. High expenditure has been found to be associated with attending clinics with company, residing in rural areas far from diagnostic centres, and illiteracy [16]. The costs usually incurred by patients are very substantial. It is therefore important that clients get value for money, which is achievable with the introduction of GeneXpert as the new diagnostic service for TB. Meanwhile, it has been argued that as soon as an agenda receives attention, it will spread very quickly and become impossible to stop or prevent [17]. On the other hand, punctuations occur and are attributed to external incidents that disturb the political regime, especially those that are sufficiently large to disrupt the equilibrium. In this case, the political system responded to the challenge of the increasing burden of TB on the country's socio-economic development, and to the external influence or pressure that came in the form of a recommendation from WHO. It has been indicated that when agenda-setting and policy formulation are international in origin, institutions play an essential role as an epistemic community in the political stream of public policies [18]. For example, WHO, the United States Agency for International Development (USAID), the Department for International Development (DFID), the Centers for Disease Control and Prevention (CDC) and other international organisations have always been supporting the "STOP TB Strategy". The aim of the network is to comprehensively strengthen the detection of TB and eventually eliminate the disease.
Structure of the implementation arrangement:
The FMOH adopted a decentralised form of governance in its implementation arrangement to introduce the GeneXpert test as a new technology in the diagnostic services for TB, so that the service is not restricted to central or reference laboratories, as was done in the case of some conventional diagnostic tests. The GeneXpert machines were distributed to community health facilities to bring the service closer to the people and expand access to it. The facilities were given as much information and as many resources as possible to empower them and give them a degree of autonomy to manage the system. This type of arrangement will increase local accountability because of its closeness to the people, giving everyone the chance to contribute to the process of implementation. It will consequently create a system that is effective, efficient and capable of responding to the needs of the populace. It has been noted that health services might be better when delivered by healthcare organisations that have greater autonomy than by organisations under closer central and political control [19]. The argument against decentralisation is the financial implication, as it has been suggested that governance could be undertaken from the centre more cheaply [20]. However, it has been argued that decentralised GeneXpert test implementation is feasible and could lead to an improvement in tuberculosis care and control [21]. A report from India indicated that GeneXpert can be deployed at the decentralised level for all TB suspects in diverse settings of the country [22]. The experience in Nigeria confirms better service delivery with a decentralised system: the closer the diagnostic service for TB is to people, the better the access, because of reduced waiting and travelling time. It is economically safer for the end users to have the service close to them, to overcome the barriers of distance and transport fares. Nickols, however, argued that successful change is based on building a new organisation and gradually transferring people from the old one to the new one in an incremental manner rather than a radical approach, to give people time to adapt to the new circumstances [23]. This is especially relevant for the health workers who will be using the new technology, because they will be learning new procedures and need time to gain a full understanding of the new system.
Meanwhile, NTBLCP implemented the introduction of the GeneXpert test as a TB diagnostic service in phases (as shown in Figure 1) to manage the change process effectively. The implementation pathway is divided into nine phases with 22 steps. The first is the "introduction" phase, consisting of steps 1-5; its expected outcome is the establishment of a planning and coordination body, the Technical Working Group (TWG). The second is the "strategic planning" phase, consisting of steps 6-10; its expected outcome is the development of a draft national GeneXpert strategic plan. The third is the "site assessment" phase, consisting of step 11; its expected outcome is the collection of all information required to finalize the strategic plan and the annual activity plan. The fourth is the "finalization of strategic plan" phase, consisting of steps 12-15; its expected outcome is the finalization of the national GeneXpert strategic plan. The fifth is the "preparation" phase, consisting of step 16; its expected outcome is the completion of laboratory renovations and the development of documents and laboratory support systems such as maintenance, supervision, etc. The sixth is the "training and installation" phase, consisting of steps 17-18; its expected outcome is the commencement of routine use of GeneXpert. The seventh is the "routine monitoring and supervision" phase, consisting of step 19; its expected outcome is assurance of the quality of GeneXpert use. The eighth is the "evaluation" phase, consisting of steps 20-21; its expected outcome is national policy and practice informed by collected evidence and experience. The last is the "scale up" phase. NB: "SWOT" stands for Strengths, Weaknesses, Opportunities and Threats [24][25][26].
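For quick reference, the pathway above can be restated as a simple data structure. This is only an illustrative Python sketch: the step ranges follow the text, the outcome strings are abbreviated paraphrases, and the assignment of step 22 to the scale-up phase is inferred from the stated total of 22 steps rather than given explicitly.

# The nine-phase GeneXpert implementation pathway restated from the text.
# Outcomes are abbreviated paraphrases; step 22 for scale-up is inferred.
IMPLEMENTATION_PATHWAY = [
    ("Introduction",                       (1, 5),   "Planning/coordination body (TWG) established"),
    ("Strategic planning",                 (6, 10),  "Draft national GeneXpert strategic plan"),
    ("Site assessment",                    (11, 11), "Information collected to finalize plans"),
    ("Finalization of strategic plan",     (12, 15), "National strategic plan finalized"),
    ("Preparation",                        (16, 16), "Lab renovations and support systems complete"),
    ("Training and installation",          (17, 18), "Routine use of GeneXpert begins"),
    ("Routine monitoring and supervision", (19, 19), "Quality of GeneXpert use assured"),
    ("Evaluation",                         (20, 21), "Evidence informs national policy and practice"),
    ("Scale up",                           (22, 22), "Expanded coverage (outcome not stated in the text)"),
]

for name, (first, last), outcome in IMPLEMENTATION_PATHWAY:
    steps = f"step {first}" if first == last else f"steps {first}-{last}"
    print(f"{name:38s} {steps:12s} -> {outcome}")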
Challenges and problems of change:
The NTBLCP faced some operational, technical and logistic issues in its roll-out of GeneXpert because it was a new technology. Several challenges were anticipated, such as human resources and capacity building, especially because the use of GeneXpert requires a certain level of computer literacy. It has been documented that the GeneXpert system consists of an instrument, a personal computer, a barcode scanner and preloaded software, and uses single-use disposable cartridges containing lyophilized reagents, buffers and washes [2]. Therefore, the computer literacy of local staff required extra training [24], and there was a need for dedicated personnel who could be trained to perform testing and keep the machine in good order. Meanwhile, it has been indicated that when employees are confronted with new organisational realities, a sentimental yearning or wistful desire for how things were before the change can be an extremely significant cause of resistance on their part [15]. Hence, decision makers and employers have to employ a number of strategies to create support for change, to ensure compliance and avoid resistance.
Tasks and functions: Strong coordination is required in a PPP arrangement to achieve the target goal. FMOH through NTBLCP is responsible for the coordination of the programme at the national level.
Overall global coordination of the development partners cannot be overemphasized, because most of the partners are international organisations. WHO is therefore responsible for setting the guidelines at the global level. Creswell et al. noted that many international agencies and donors have already expressed interest in investing resources in the roll-out of GeneXpert [2]. Coordination of these activities is essential to optimise the use of available resources, streamline activities, and ensure sound technical advice and approaches at the country level. The national GeneXpert TWG is responsible for the planning and implementation activities shown in Figure 1. Cepheid HBDC is the company responsible for the production of the GeneXpert device, and it supplies the machine at a subsidized price because of the partnership arrangement [27]. KNCV Tuberculosis Foundation is responsible for the procurement, installation and maintenance of the device; it also trains health workers on how to operate it. The Global Fund releases grants to support the programme. Civil society organisations are involved in community sensitization, and health facilities are the service providers.
Monitoring and evaluation:
Monitoring and evaluation can be seen as a communication channel because it gives feedback to programme managers through the data collected. The data come from recording and reporting. Some of the data collected routinely are the number of days in a month on which the device could not be operated, and the reasons why it could not be operated. The number of tests per month is monitored as well, and data are also collected on logistic issues such as cartridge supply, power stability, etc. These data allow the generation of simple indicators to quantify the impact of the new test on laboratory work and the diagnosis of TB. However, there is a problem with recording and reporting, because the monitoring and evaluation system in Nigeria is paper-based. This complicates data collection, because the GeneXpert register is sometimes not in place in the field or not properly filled in. Nigeria should ideally move completely to an electronic recording and reporting system to generate data that are relevant to the national programme, for the purposes of monitoring and evaluation and direct patient management [24].
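As an illustration of how the routinely collected records described above can be turned into the simple indicators mentioned, the following Python sketch computes device uptime and test volume per month. The record fields and values are hypothetical, chosen only to mirror the data elements listed in the text (non-operational days and their reasons, monthly test counts, logistics issues).

# Hypothetical monthly GeneXpert site records mirroring the data elements
# described above; values are illustrative, not programme data.
records = [
    {"month": "2015-01", "days_in_month": 31, "days_not_operated": 4,
     "tests_run": 210, "downtime_reason": "cartridge stock-out"},
    {"month": "2015-02", "days_in_month": 28, "days_not_operated": 9,
     "tests_run": 150, "downtime_reason": "power instability"},
]

# Per-month indicators for programme managers
for r in records:
    uptime = 100 * (1 - r["days_not_operated"] / r["days_in_month"])
    print(f'{r["month"]}: uptime {uptime:.0f}%, '
          f'{r["tests_run"]} tests, downtime cause: {r["downtime_reason"]}')

# Simple aggregate indicators
total_tests = sum(r["tests_run"] for r in records)
mean_uptime = sum(
    1 - r["days_not_operated"] / r["days_in_month"] for r in records
) / len(records)
print(f"Total tests: {total_tests}; mean uptime: {mean_uptime:.0%}")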
Conclusion
The fundamental problem in low-resource settings like Nigeria, as depicted in the "Piot" model of case finding and treatment, is that individuals seeking care are lost before treatment even begins, as they are not properly and promptly diagnosed [28]. It has been noted that, even with increasing evidence that a wide range of new diagnostic technologies in the field of TB diagnostics can be used successfully in the most challenging settings, most of the patients who should benefit from these technical advances do not yet have access to them [29]. This implies that the roll-out of an improved TB diagnostic tool like GeneXpert alone might not be enough to guarantee better outcomes for patients, because the processes of implementation within existing health care systems can critically affect the impact. However, there is hope that this reform will be sustainable in the country because of its adoption of a decentralised service system, through the distribution of GeneXpert machines to health facilities at the community level and the granting of autonomy in their management. This is expected to create a system that is effective, efficient and capable of responding to the needs of the populace. Though the use of GeneXpert has gained the support of international partners, with discounts negotiated for the test, the concessionary cost is only applicable to non-governmental organisations (NGOs) and the public sector [1]. This implies that it might be difficult to access TB services at private facilities, where a number of people in the country receive health services, thereby limiting access to early and accurate diagnosis and potentially increasing "morbidity associated diagnostic delay, dropout and mistreatment" [21]. In addition, the fate of GeneXpert funding in Nigeria is uncertain, as the government of Nigeria is yet to commit to funding the machine, in spite of assertions that GeneXpert's ability to offer early diagnosis means that its adoption would cost less than current diagnostic devices. There is therefore no doubt that this reform is an ambitious and demanding one, and its successful implementation requires the engagement and commitment of all the relevant stakeholders involved in the health system.
Recommendation
There is a need for regular collection and analysis of performance data from TB centres at all levels of care (primary, secondary and tertiary health facilities in the country) to generate evidence on the feasibility and impact of introducing decentralised GeneXpert service delivery. On the other hand, government could leverage public private partnership further to fund the procurement and maintenance of GeneXpert machines, as a pilot analysis suggests that GeneXpert can likely increase TB detection by as much as 75% [12]. Meanwhile, health-system barriers to GeneXpert use in the country must be pre-empted and resolved. For example, with the procurement of additional machines, there will be a need to train additional local staff to perform GeneXpert troubleshooting and maintenance [24]. This is because even the most improved and promising diagnostic technologies will have only minimal impact if they cannot be reached by those who need them; in other words, the actual impact of any diagnostic technology or intervention depends on the system in which it is deployed. Nigerian health systems must therefore be improved to encourage potential clients not to delay care-seeking and to give swift access to proper treatment as soon as a diagnosis is received. Furthermore, renewed and concerted efforts to maximize the potential roles of governments, development partners, product developers, industries and NGOs are urgently needed [30] if tuberculosis is to be eliminated in Nigeria. Meanwhile, the National TB Programme must provide strong leadership and coordination for the implementation process [24]. In other words, to achieve the full potential of promising diagnostic technologies like GeneXpert, various groups of stakeholders must give their support to large-scale innovation and delivery, while civil society organisations, activists and advocates ought to ensure that everyone is held accountable and that the public sustains the call for improved systems [5]. Lastly, it is essential that operational and implementation researchers promptly detect and address the wide range of issues that are vital to improving TB services, while decision makers are encouraged to convert scientific evidence into policies and guidelines that address the full gamut of issues identified and the recommendations from the researchers. | 2021-05-11T13:47:37.646Z | 2020-12-25T00:00:00.000 | {
"year": 2020,
"sha1": "cffaafd03e7f1b4765338d3bb39049362833808c",
"oa_license": "CCBY",
"oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.cajph.20200606.17.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "cffaafd03e7f1b4765338d3bb39049362833808c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Business"
]
} |
4332742 | pes2o/s2orc | v3-fos-license | A review of nebulized drug delivery in COPD
Current guidelines recommend inhaled pharmacologic therapy as the preferred route of administration for treating COPD. Bronchodilators (β2-agonists and antimuscarinics) are the mainstay of pharmacologic therapy in patients with COPD, with long-acting agents recommended for patients with moderate to severe symptoms or those who are at a higher risk for COPD exacerbations. Dry powder inhalers and pressurized metered dose inhalers are the most commonly used drug delivery devices, but they may be inadequate in various clinical scenarios (eg, the elderly, the cognitively impaired, and hospitalized patients). As more drugs become available in solution formulations, patients with COPD and their caregivers are becoming increasingly satisfied with nebulized drug delivery, which provides benefits similar to drugs delivered by handheld inhalers in both symptom relief and improved quality of life. This article reviews recent innovations in nebulized drug delivery and the important role of nebulized therapy in the treatment of COPD.
Introduction
Inhaled pharmacologic therapy is a cornerstone of treatment for patients with COPD. 1,2 Four commonly prescribed inhalation devices, pressurized metered dose inhalers (pMDIs), dry powder inhalers (DPIs), slow mist inhalers (SMIs), and nebulizers, have similar efficacies in patients with COPD, [3][4][5][6] provided they are used appropriately. Although DPIs and pMDIs are the most commonly used devices 7,8 and are recommended for long-term treatment in the vast majority of patients, 2 the Global Initiative for Chronic Obstructive Lung Disease (GOLD) strategy document recommends nebulizers for specific patient populations (eg, patients with very low inspiratory flow rates) in whom nebulizer treatment may provide more benefits than DPIs or MDIs. 2 Further, GOLD recommends evaluating the benefits of nebulizer treatment symptomatically and continuing treatment as long as similar benefits are not achievable by simpler, cheaper, and more portable alternatives. In addition, both patients and their caregivers are becoming increasingly satisfied with nebulized drug delivery and have reported benefits in symptom relief, ease of use, and improved quality of life when using this system. 9,10 Moreover, several of the emerging medications for COPD (both marketed and under development) utilize nebulizer technology. This article reviews recent innovations in nebulized drug delivery and the important role of nebulized therapy in the treatment of COPD.
Selection of articles for review
After dividing the review topic into specific subsections, articles were selected for inclusion based on comprehensive reviews of the literature according to each subsection. A PubMed search (January 1, 1996 to March 15, 2016) was conducted using multiple primary topic headers combined with appropriate terms for each section of the article (eg, COPD + nebulizers or COPD + nebulizer therapy). The results of the PubMed search were supplemented by relevant papers identified from reference lists of published articles and the author's knowledge of the literature. Selection of articles for discussion focused on information published within the past 5 years.

COPD
COPD, a common preventable and treatable disease, is characterized by progressive persistent airflow obstruction that is associated with an enhanced inflammatory response to noxious particles or gases in the lung and airways. 2,11,12 COPD represents a global health problem, is ranked as the fourth leading cause of death in the world, and significantly affects patient quality of life. 2,13,14 The global social and economic burden of COPD is projected to increase, due to aging populations and the continued use of tobacco and exposure to biomass fuels, 15,16 underscoring the need for more effective management of this disease. While 12 million people in the US are known to have COPD, it is estimated that up to 24 million may have impaired lung function and undiagnosed disease. 17 In 2010, the cost of COPD in the US was projected to be ~$49.9 billion, which included ~$20 billion in indirect costs (eg, loss of work productivity and earnings) and $30 billion in direct health care expenditures (eg, prescription medicines and emergency department visits). 17,18 To reduce symptoms, frequency, and severity of COPD exacerbations and improve health status and exercise tolerance, the GOLD strategy document 2 recommends that bronchodilators are the cornerstone of pharmacotherapy for COPD in the majority of patients. 1,2 However, physical and/or cognitive symptoms that are common in some COPD patients (eg, the elderly 19,20) could interfere with the proper administration of inhaled therapies via handheld inhalers, 21 resulting in insufficient dosing and jeopardizing health outcomes, reducing quality of life, and further adding to the economic burden of COPD. 2,22 Further, during exacerbations and in recovery, many COPD patients have decreased peak inspiratory flow rate (PIFR) and are unable to use handheld inhalers effectively. In these populations, inhaled therapies administered via nebulizers may offer improved symptom control 21,23 and quality of life 9 over non-nebulized bronchodilator therapy.
Pharmacologic therapy
In general, pharmacologic therapy is part of an integrated treatment approach in patients with COPD that begins with smoking cessation and vaccines (influenza and pneumococcal) for all current smokers and progresses to treatment with inhaled therapy. 24 Inhaled treatment is tailored to the patient and should be guided by the severity of COPD symptoms, risk of COPD exacerbations, drug availability, and patient response (Table 1). 2 For patients at low risk of COPD exacerbations with relatively few symptoms (eg, those in GOLD patient category A), 2 short-acting bronchodilators are available for acute relief of symptoms or for use before physical activities 25 to prevent the onset of symptoms (a long-acting inhaled bronchodilator, as well as theophylline, is recommended as an alternative choice). 2 For patients with more severe symptoms or who are at a higher risk of COPD exacerbations (eg, those in GOLD patient categories B, C, or D), long-acting bronchodilators are recommended over short-acting bronchodilators for maintenance therapy to improve symptoms, exercise tolerance, and health-related quality of life and reduce the risk of exacerbations. 2 As a result, long-acting bronchodilators with or without inhaled corticosteroids (ICS) are the first- or second-choice drugs for the majority of patients with COPD. 1 Although handheld pMDIs or DPIs are effective in most patients with COPD, cognitively impaired and elderly patients may benefit more from the use of a nebulizer, since these patient populations may have difficulty synchronizing inhalation with inhaler actuation or may be unable to generate a sufficient inspiratory flow rate against the resistance of a breath-activated DPI to generate an effective aerosol. 6,22,26-28 SMIs are compact portable multidose inhalers that use liquid formulations similar to those in nebulizers but, like MDIs and DPIs, require manual manipulation to generate the aerosol and special breathing techniques for effective delivery of the aerosolized medication to the lungs. 29 The choice of therapy, however, ultimately depends on a wide range of factors, including the prescribing physician, the availability of specific drug/device pairings, drug cost, and patient preferences and satisfaction. 3,22,26,28,30,31 Each of the delivery devices that are available for administering drugs to patients with COPD (eg, pMDIs, DPIs, SMIs, and nebulizers) has advantages and disadvantages (Table 2). 3,6,29,31,32

[Table 2 notes: (a) Not all DPIs are high-resistance inhalers, but even the low-resistance inhalers (eg, Breezhaler®; Novartis, Basel, Switzerland) require a relatively high inspiratory flow compared with higher resistance devices to generate a comparable pressure drop across the resistance of the device in order to de-agglomerate the powder and generate an effective aerosol. 98 (b) Breath-actuated MDIs address this concern and are available in some countries. (c) Higher (~50%) deposition occurs with solution HFA MDIs (eg, beclomethasone HFA and flunisolide HFA). Abbreviations: DPI, dry powder inhaler; HFA, hydrofluoroalkane; MDI, metered dose inhaler; pMDI, pressurized metered dose inhaler; SMI, slow mist inhaler.]

Nebulized drug delivery
In patients with COPD, nebulizers are an alternative to pMDIs and DPIs for providing inhaled therapy, provided the drug is available and chemically stable in liquid form (Figure 1). 6,31 Despite some drawbacks associated with nebulizers (eg, variably long treatment times and daily cleaning), current evidence suggests that the efficacy of treatments administered to patients with moderate to severe COPD via nebulizers is similar to that observed with pMDIs and DPIs. 3-6 Further, market analysis indicates that, in the US, ~45% of patients with COPD have a nebulizer, 69% of whom use it on a regular basis. 6 Several options exist for the type of nebulizer (eg, jet, ultrasonic, and vibrating mesh), with many models commercially available (Figure 1).

The Akita® (Vectura, Chippenham, UK) jet nebulizer individualizes aerosol delivery using the adaptive aerosol delivery (AAD) control system (Figure 2), 33 which results in high efficiency and low variability in aerosol drug delivery to patients. Despite these benefits, however, the Akita, in common with older jet nebulizers, is a large, poorly portable nebulizer that has a longer (10 minutes) treatment time than the newer vibrating mesh nebulizers. 34 The Trek® S (PARI, Midlothian, VA, USA) portable jet nebulizer is a convenient alternative to larger, more powerful tabletop compressors. In a comparative study of four portable nebulizer systems, the Trek S delivered 33% more respirable dose than the next best system, Mini Elite™ (Philips Healthcare, Andover, MA, USA). 35

[Figure 1: Examples of commercially available nebulizers that incorporate newer aerosol-generating technologies. Notes: Akita® Jet (courtesy of Vectura, UK) and the I-neb® (courtesy of Philips Healthcare, USA) employ AAD technology to deliver and monitor nebulizer treatments. Trek® S (courtesy of PARI, USA) is a portable jet nebulizer. MicroAir® NE-U22 (courtesy of Omron, USA) and the eFlow® (courtesy of PARI, USA) are vibrating mesh aerosol nebulizers. Respimat® (Boehringer Ingelheim) is a high-efficiency soft mist inhaler. Aeroneb® Go (courtesy of Philips Healthcare) is an ultrasonic nebulizer. All of these devices are approved for use in the US. Abbreviation: AAD, adaptive aerosol delivery.]

The Aeroneb® Go (Philips Healthcare) is a portable, compact, handheld ultrasonic nebulizer that is easily assembled, silent, and has a short (5 minutes) treatment duration. 3,6,34 The eFlow® (PARI) is a battery-operated, compact, portable vibrating mesh nebulizer that has been shown to improve patient compliance due to its comparatively short (5 minutes) treatment time. 36 MicroAir® NE-U22 (Omron, Chicago, IL, USA) is a mesh nebulizer that provides efficient aerosol drug delivery with a predominantly fine particle fraction. 3,6,34 Like the Aeroneb Go and eFlow devices, the MicroAir is expensive and can be difficult to maintain, as it requires disassembly and cleaning after each use to prevent clogging of the mesh apertures. 34 The I-neb® (Philips Healthcare) AAD nebulizer is a small, lightweight, battery-powered, silent smart nebulizer that combines mesh and AAD technologies to deliver a precise, reproducible dose. 6,34 With AAD technology, automated timing of aerosol delivery (based on the patient's breathing pattern) improves the precision and reproducibility of dosing 34 (Figure 2) and, compared with previous nebulizers without AAD, significantly improves dyspnea and fatigue in patients with COPD. 37 A disadvantage of the newer mesh nebulizers is that little information is available concerning the ideal dose of the bronchodilator solution to add to the nebulizer. Consequently, the potential for overdosing exists if the same dose of the bronchodilator that is conventionally used with jet nebulizers is added to these newer nebulizers.
To address this concern, device manufacturers are developing a new generation of closed-system mesh nebulizers that will accept only the ampule containing the specific drug approved for use with a specific device based on the demonstration of safety and efficacy. 31 The Respimat ® (Boehringer Ingelheim, Ingelheim, Germany) is an SMI that delivers a slow-moving mist, allowing the inhalation of medication independent of inspiratory effort 38 (ie, via the release of stored energy from a tensed spring when the tension is released by pushing a button). Although not strictly classified as a nebulizer, the Respimat device, a compact handheld aerosol delivery device similar in size to MDIs and DPIs, is included here because it shares several performance characteristics with the nebulizers discussed earlier, such as liquid formulation, propellant-free function, use of mechanical energy for actuation, generation of an aerosol with a predominantly fine particle fraction (micronebulized), and lack of dependence on high inspiratory flow rates. 38 However, in contrast to more conventional nebulizers for which only tidal breathing is required, use of the Respimat requires a special breathing technique (full expiration followed by full inhalation and breath holding). 38 Some coordination between inhalation and actuation is also necessary, although the timing for such is more forgiving with the Respimat than for MDI devices, since the aerosol delivered from the Respimat lasts 1.5 seconds (as opposed to a fraction of a second from an MDI).
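The adaptive timing behind the AAD technology described above (used by the Akita and I-neb) can be illustrated with a short Python sketch: the controller estimates the patient's inspiratory time from recent breaths and releases aerosol only during an early fraction of each predicted inspiration. The three-breath averaging window and the 50% delivery fraction below are illustrative assumptions, not manufacturer specifications.

# Illustrative sketch of adaptive aerosol delivery (AAD) timing.
# The 3-breath averaging window and 50% delivery fraction are assumptions
# for illustration only, not the devices' actual parameters.
from collections import deque

class AADController:
    def __init__(self, delivery_fraction=0.5, history=3):
        self.delivery_fraction = delivery_fraction
        self.recent_insp_times = deque(maxlen=history)

    def record_breath(self, inspiratory_time_s):
        """Store the measured inspiratory time of a completed breath."""
        self.recent_insp_times.append(inspiratory_time_s)

    def pulse_duration(self):
        """Release aerosol only for an early fraction of the expected
        inspiration, predicted from the average of recent breaths."""
        if not self.recent_insp_times:
            return 0.0  # no delivery until a breathing pattern is learned
        expected = sum(self.recent_insp_times) / len(self.recent_insp_times)
        return self.delivery_fraction * expected

aad = AADController()
for t in [1.8, 2.0, 1.9]:        # measured inspiratory times (seconds)
    aad.record_breath(t)
print(f"Next aerosol pulse: {aad.pulse_duration():.2f} s")  # ~0.95 s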
Nebulized pharmacologic therapy
Many of the drugs used for the treatment of COPD were initially approved for use in pMDIs or DPIs 39 and are now available in solution form for use with nebulizers ( Table 3). The long-acting agents are indicated for maintenance treatment of COPD-associated airflow obstruction, while short-acting bronchodilators are indicated for acute relief of bronchospastic symptoms of COPD. Clinical trials generally have demonstrated significant improvement in forced expiratory volume in 1 second (FEV 1 ) over the dosing interval and reduction in rescue medication use with nebulized therapy.
Long-acting β2-agonists (LABAs)
Arformoterol
Nebulized arformoterol tartrate is of potential benefit to patients with hyperinflation and low PIFR. 40 Arformoterol is safe in combination therapy with certain handheld inhalers (eg, ICS, long-acting muscarinic antagonists [LAMAs], and short-acting β2-agonists [SABAs]), but it is contraindicated in combination with a handheld inhaled LABA (alone or in combination with an ICS or an LAMA). A 12-month Phase 4 trial found no increased risk of respiratory death or COPD exacerbation-related hospitalizations with nebulized arformoterol treatment. 41 Being a single enantiomer of formoterol, arformoterol may hypothetically have more potent bronchodilator properties, microgram per microgram, than racemic formoterol fumarate, but no major clinical differences between the two drugs have been observed in patients with COPD. 42 Maintenance therapy with nebulized arformoterol or formoterol demonstrated a 37% reduction and a 42% reduction in rescue albuterol use, respectively. 43,44 Partial tolerance to the bronchodilator effect of arformoterol was noted after 6 weeks of therapy, but the reduction in bronchodilator efficacy did not progress beyond 6 weeks and was not considered clinically significant. 44 Finally, arformoterol can improve lung function in combination with LAMAs: in patients with COPD who were receiving twice-daily nebulized arformoterol, tiotropium bromide given in combination with arformoterol produced significantly greater bronchodilation than either arformoterol or tiotropium monotherapy (P<0.001). 45

Formoterol
Formoterol is differentiated from some other β2-agonists by its rapid onset of significant bronchodilation, within 5 minutes of administration. 46
Nebulized formoterol fumarate significantly increased FEV1 relative to placebo (P<0.001) when administered for 12 weeks and had similar efficacy and safety compared with the original formoterol fumarate dry powder formulation. 46 Quality of life at week 12, as measured by the St George's Respiratory Questionnaire, demonstrated significant and clinically meaningful improvements in total score and in symptom and impact scores for formoterol vs placebo. Patients treated with formoterol reported greater treatment satisfaction and perception of disease control compared with treatment with short-acting bronchodilators delivered 4 times daily. 48 Furthermore, similar to arformoterol, 45 nebulized formoterol significantly increased bronchodilation in patients receiving the LAMA tiotropium bromide, 49 which indicates that formoterol can improve lung function in combination with antimuscarinics. With regard to tachyphylaxis to the bronchodilator effect of formoterol + tiotropium, tachyphylaxis was not observed during 6 weeks of formoterol add-on treatment in patients receiving tiotropium maintenance therapy, 50,51 which is consistent with 12-week trials that did not show any tolerance to the effect of formoterol alone in patients with COPD. 46
Olodaterol SMI
Olodaterol hydrochloride SMI is a long-term, once-daily maintenance treatment for controlling symptoms in adults with COPD. 52,53 In Phase 3 trials, once-daily olodaterol improved lung function (FEV 1 ) compared with placebo over 48 weeks of treatment, with bronchodilation being achieved and maintained within the 24-hour dosage interval, supporting its once-daily administration. 52,54 Olodaterol SMI is not indicated to treat either acute deterioration of COPD or asthma.
LAMA
Tiotropium SMI
Tiotropium bromide SMI provides a solution form of tiotropium bromide 55 that is efficacious at lower doses compared with the tiotropium bromide HandiHaler® (Boehringer Ingelheim). 56 In patients with COPD, tiotropium SMI improved lung function, health-related quality of life, and dyspnea, reduced acute exacerbations of COPD, and was as effective and safe as the tiotropium HandiHaler. 57,58 Tiotropium is generally well tolerated in patients with COPD, but antimuscarinic side effects (eg, dry mouth) are among the most commonly reported adverse events. 59
LABA-LAMA fixed-dose combination
Tiotropium-olodaterol SMI
Tiotropium bromide-olodaterol hydrochloride SMI is a fixed-dose combination daily maintenance treatment for patients with COPD.

Short-acting muscarinic antagonist (SAMA)
Ipratropium
Ipratropium bromide, an SAMA in a nebulized inhalation solution administered either alone or with other bronchodilators (eg, β2-agonists), is indicated as a bronchodilator for the maintenance treatment of bronchospasm associated with COPD, including chronic bronchitis and emphysema, when administered on a regular four-times-daily schedule. 68 In 12-week clinical studies in patients with bronchospasm associated with COPD, significant improvements in pulmonary function (FEV1 increases of 15% or more) occurred within 15-30 minutes and persisted for periods of 4-5 hours in the majority of patients.
SABA-SAMA fixed-dose combination
Albuterol-ipratropium
Nebulized albuterol sulfate-ipratropium bromide, a fixed-dose combination product, is indicated for the treatment of bronchospasm associated with COPD in patients requiring more than one bronchodilator. 69,70 Research has shown that patients with COPD treated with albuterol-ipratropium have lower hospital expenditures and fewer therapy interruptions than patients taking the individual components as dual single agents (DSAs). 71 In a population-based retrospective claims analysis, patients who were taking nebulized albuterol-ipratropium (n=468) had 31% fewer emergency department visits and lower costs compared with patients taking a DSA (P=0.03 and P<0.001, respectively). In addition, the albuterol-ipratropium cohort included significantly fewer individuals who reported treatment interruptions (10%; P=0.003).
Albuterol-ipratropium SMI
Albuterol sulfate-ipratropium bromide SMI is indicated for patients with COPD on a regular aerosol bronchodilator who continue to have evidence of bronchospasm and who require a second bronchodilator. 72 In a controlled clinical study, 652 patients with moderate to severe COPD received either albuterol, ipratropium, or albuterol-ipratropium SMI for 85 days. 70 Over the course of the study, the acute pulmonary function response (peak expiratory flow rate) was significantly better with albuterol-ipratropium compared with albuterol or ipratropium alone; quality of life and symptoms, however, were unchanged over the course of the study in all treatment groups. The use of an SAMA either as a single agent (eg, ipratropium) or in combination with a short-acting β-agonist (eg, albuterol) is not recommended in patients receiving concomitant therapy with an LAMA because of concern regarding possible additive anticholinergic side effects and, hypothetically, displacement of the more effective long-acting agent by the short-acting drug from the muscarinic receptor.
Nebulized therapy in development
Despite the benefits of combination therapies, the late-stage development pipeline of nebulized medications for the treatment of COPD currently comprises two LAMA monotherapies, SUN-101 (Sunovion, Marlborough, MA, USA) and TD-4208 (Theravance, South San Francisco, CA, USA), that could provide improvements over existing drugs (Table 4).
SUN-101
SUN-101, a soluble glycopyrrolate bromide formulation in Phase 3 development, is rapidly (within 2 minutes) delivered to the lungs using a novel custom-designed, portable electronic nebulizer device (eFlow), 74 with no clinically relevant changes in heart rate, systolic and diastolic blood pressure, or electrocardiographic parameters, including the QTc interval. 73 The SUN-101 Phase 3 program consists of three clinical trials that will enroll ~2,340 adults with moderate to very severe COPD. 75-77

Revefenacin (TD-4208)
Revefenacin is a nebulized LAMA with similar potency to tiotropium bromide but with less potential for antimuscarinic side effects (eg, dry mouth). 78,79 Revefenacin, administered via the PARI Trek S nebulizer, is in clinical development as a once-daily maintenance treatment for COPD. The results of several Phase 2 studies support the ongoing Phase 3 program. Evaluation of the pharmacokinetics of revefenacin (n=127) demonstrated low plasma concentrations after inhaled administration, consistent with high systemic clearance and a lack of systemic antimuscarinic activity. 80 A randomized, crossover, 7-day, multiple-dose study demonstrated that the bronchodilator effect of once-daily revefenacin was sustained for more than 24 hours in patients with COPD. 81 In a 28-day dose-ranging Phase 2 study in patients with COPD, revefenacin-treated patients' use of rescue medication was significantly reduced by more than one puff per day in a dose-dependent manner (P<0.01). 82 The Phase 3 program consists of three clinical trials that will enroll 2,000 patients with moderate to very severe COPD 83-85 and is designed to support regulatory approval of the drug in the US.
Discussion
With patients becoming increasingly satisfied with nebulized drug delivery, 9 improved integration of nebulizers and nebulized therapies into the COPD treatment paradigm should lead to improved clinical and health economic outcomes for patients with COPD. Certain COPD patient populations may especially benefit from the use of nebulizer therapy (eg, patients with low PIFR, the elderly, and those with cognitive or visual impairment or diminished manual dexterity). It is therefore important to select the appropriate device for each patient, particularly for older or more severely affected patients who may be unable to use handheld devices reliably, or for those who prefer the feeling of control that a nebulized product provides. 48 Many emerging COPD medications employ portable nebulizers that are typically battery operated, making them less cumbersome to carry. Compared with pMDIs and DPIs, these nebulizers require no hand-breath coordination or extra effort during inhalation. 6 Further, the wider availability of high-efficiency nebulizers will ensure accurate delivery of emerging nebulized medications in patients with COPD, which may lead to further reductions in symptoms and exacerbation rates in these patients. However, safety and efficacy studies will be required to define the optimal doses of medications delivered by these high-efficiency nebulizers. Moreover, patient education is crucially important to foster adherence to regular use of nebulized long-acting bronchodilators as part of maintenance therapy, rather than relying on short-acting nebulized agents, which should be reserved for rescue treatment of acute symptoms.
When evaluating a nebulized drug delivery option for patients with COPD, important considerations include the availability of specific drug/nebulizer pairings, the need for drug combinations, the ability to use the selected device correctly, drug/nebulizer cost and reimbursement, patient preference and satisfaction, and the clinical scenario. 3,22,26,28,30 Nebulized drug delivery is generally preferred by patients who have been discharged from hospitals after an inpatient stay, who have demonstrated consistent difficulty using handheld inhalers, or who have impaired manual dexterity, impaired cognition, or chronic muscle weakness. 6 In these scenarios, the benefits of nebulization therapy can outweigh potential inconveniences and lead to improved adherence and
outcomes in patients with COPD. 9 For some patients, use of both a nebulizer (as maintenance therapy) and a handheld inhaler (as rescue medication, particularly when outside the home) may provide the best combination of efficacy and convenience. 6,26,86 The long-acting agents formoterol fumarate and arformoterol tartrate, which currently serve as the mainstays of nebulized maintenance therapy for COPD, have demonstrated significant improvements in FEV1, but there are no head-to-head clinical trials comparing the efficacy and safety of these two nebulized therapies. While tolerance (tachyphylaxis) to the bronchodilator effect of arformoterol has been reported in clinical trials, 40,44,87 no other clinical manifestations of tolerance were evident. In contrast, clinical trials with nebulized formoterol failed to show any evidence of tolerance, as indicated by maintained FEV1 AUC and reduced rescue inhaler use with up to 12 weeks of treatment. 46 Combination therapy involving two long-acting bronchodilators with differing mechanisms of action is recommended in patients whose COPD is not well controlled with one drug alone. 1,88 LABA and LAMA combinations, for example, have shown additive bronchodilator effects at doses used for monotherapy, without additional safety concerns, 89 and may increase patient adherence. 90 Approval of the two nebulized LAMA compounds in Phase 3 clinical trials, SUN-101 and TD-4208, will likely increase the use of combination LABA/LAMA nebulized therapy, although a fixed-dose LABA/LAMA combination nebulized product would also be welcome from a patient compliance perspective. Further, the development of a nebulized version of the widely used fixed-dose combination therapy, ICS/LABA, would benefit patients who need or prefer nebulized treatments.
The recent approval of tiotropium-olodaterol SMI 62 illustrates that demand for combination therapy is driving device innovation. Cosuspension-based pMDIs, for example, are in development, 89,91,92 which may facilitate further innovation in fixed-dose combination inhaler products. For example, the US Food and Drug Administration recently approved a novel LAMA/LABA cosuspension-based pMDI (glycopyrrolate and formoterol fumarate) for patients with COPD. 93 Triple therapy for COPD (ie, treatments containing a LABA, a LAMA, and an ICS) has also been proposed as a convenient treatment option. 94,95 Indeed, the first triple inhaler, containing formoterol, tiotropium, and ciclesonide, is already on the market in India. 96 Future treatment of patients with COPD will require the continued development of novel nebulizer devices and drugs for patient groups and clinical scenarios where existing pMDI/DPI therapy is inadequate. Health care providers should stay up to date regarding emerging nebulized treatment options that could provide additional clinical benefits for their patients. In daily practice, prescribing the most appropriate nebulized therapy should take into consideration the available drug formulations, combinations, and devices, as well as the patient's pulmonary function, skills, and preferences. In this way, health care providers and patients together can optimize the benefits of available nebulized treatments for patients with COPD.
Electron Capture Processes in Intermediate Mass Stars
A. Idini, A. Brown, K. Langanke, G. Martínez-Pinedo
Introduction
Electron capture on nuclei is a fundamental process in the late stages of stellar evolution [1,2]. These processes are particularly crucial in the last phases of intermediate-mass stars ($M \approx 8{-}10\,M_\odot$), where double electron capture on even-even nuclei depletes the O-Ne-Mg core of electrons. This decreases the electron-degeneracy pressure counterbalancing the gravitational collapse, setting the stage for the onset of an electron-capture supernova [3-5]. In other words, the Chandrasekhar mass can be considered proportional to the electron abundance ($M_\mathrm{Ch} \simeq 2.8\,M_\odot \times Y_e$), hence the processes modifying the $Y_e$ of the environment are key to understanding the collapse of the stellar core.
Inside the star, the high density of the environment increases the electron Fermi energy ($k_F \propto \rho^{1/3}$), enabling electron captures that are energetically forbidden in vacuum and blocking the phase space of beta decays, reducing the decay rate with respect to the vacuum one observed in the lab. In a recent study [6], Martínez-Pinedo and collaborators evaluated the electron-capture and beta-decay rates for several nuclei that are key for the onset of the electron-capture supernova, showing that the forbidden transitions can become dominant over the allowed ones for relevant density and temperature conditions. However, for the case of the forbidden 20Ne(0+, gs) + e− → 20F(2+, gs) + νe electron capture and the 20F(2+, gs) → 20Ne(0+, gs) + e− + ν̄e beta decay, only the experimental upper limit has been used. We provide here a shell-model calculation of the second-forbidden transition between the two ground states, estimating the branching ratio for this decay channel, which can eventually be probed experimentally. The strength of the electron capture and β-decay in the astrophysical scenario is also discussed.
Second-Forbidden β-decay
Beta-decay and electron-capture processes are classified according to the angular momentum carried by the emitted leptons. The case in which the emitted leptons are in an s-wave state is called an "allowed" transition; higher L of the emitted leptons corresponds to a higher degree of "forbiddenness" of the decay. p-wave leptons imply a first-forbidden transition, d-wave a second-forbidden one, etc., with a consequent reduction of the transition rate. Moreover, due to the coupling of the orbital angular momentum with the spin, a single ∆L can correspond to up to three values ∆J = L − 1, L, L + 1; the case of maximum ∆J is called "unique", since a single nuclear matrix element is dominant. The selection rule on parity implies $\Delta\pi = (-1)^{\Delta L}$ (cf. [7,8]).
In a beta decay, the energy spectrum of the emitted electron is a continuous function of the energy, due to the three-body nature of the decay, and is given by the shape function

$$N^{\beta}_{if}(E) = p\,E\,(Q_{if} - E)^2\, F(E)\, C^{\beta}_{if}(E), \qquad p = \sqrt{E^2 - 1}. \tag{2.1}$$

In the following we consider $m_e c^2 = 1$ and represent the energies in units of the electron mass. $Q_{if}$ is the nuclear Q-value (in units of $m_e$) between the final state $f$ and the initial state $i$, and $F(E)$ is the Fermi function taking into account the electromagnetic interaction between the electron and the nucleus. $C^{\beta}_{if}(E)$ is the shape factor, which is given schematically by the reduced transition probability for the nuclear transition,

$$C^{\beta}_{if}(E) \propto g_A^2\, \frac{\left|\langle f \,\|\, H_{\beta}(E) \,\|\, i \rangle\right|^2}{2J_i + 1}, \tag{2.2}$$

where $g_A$ is the weak axial coupling constant and $H_{\beta}$ is the transition Hamiltonian (e.g., the Gamow-Teller operator for allowed decays); the shape factor is constant with respect to energy for allowed decays and has a polynomial energy dependence of increasing order for higher degrees of forbiddenness.
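As a numerical illustration of Eqs. (2.1)-(2.2), the following Python sketch evaluates the shape function for a toy transition. The nonrelativistic point-charge Fermi function and the Q-value used below are textbook approximations introduced here for illustration; they are not the shell-model inputs used in this work.

```python
import numpy as np

ME_C2 = 0.511  # electron rest energy in MeV

def fermi_function(Z, E):
    """Nonrelativistic point-charge Fermi function F(Z, E).
    E is the total electron energy in units of m_e c^2 and Z is the
    charge of the daughter nucleus; this is a common textbook
    approximation, not the exact function used in the paper."""
    p = np.sqrt(E**2 - 1.0)        # electron momentum in units of m_e c
    eta = (Z / 137.036) * (E / p)  # Sommerfeld parameter alpha*Z/beta
    x = 2.0 * np.pi * eta
    return x / (1.0 - np.exp(-x))

def shape_function(E, Q_if, Z, C):
    """Eq. (2.1): N(E) = p E (Q_if - E)^2 F(Z, E) C(E)."""
    p = np.sqrt(E**2 - 1.0)
    return p * E * (Q_if - E)**2 * fermi_function(Z, E) * C(E)

# Illustrative 20F(gs) -> 20Ne(gs) kinematics: Q ~ 7.025 MeV, so the
# endpoint total energy is Q_if = 7.025/0.511 + 1 in units of m_e c^2.
Q_if = 7.025 / ME_C2 + 1.0
E = np.linspace(1.0 + 1e-6, Q_if - 1e-6, 500)
N = shape_function(E, Q_if, Z=10, C=lambda e: 1.0)  # constant C: allowed shape
print(f"spectrum peaks near {E[np.argmax(N)] * ME_C2:.2f} MeV total energy")
```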
The decay rate of the transition is then given by

$$\lambda^{\beta}_{if} = \frac{\ln 2}{K} \int_{1}^{Q_{if}} N^{\beta}_{if}(E)\, \mathrm{d}E, \tag{2.3}$$

where $K$ is a constant that can be determined from superallowed Fermi transitions, $K = 6144 \pm 2$ s [9]. The 20F β-decay to 20Ne has a half-life of 11.07 s. The decay is dominated by the allowed transition from the 2+ ground state of 20F to the 2+ excited state at 1.634 MeV excitation energy of 20Ne [10]; a non-unique first-forbidden decay branch to the 2− excited state at 4.967 MeV excitation energy of 20Ne has also been observed, with a branching ratio of 9 × 10−4 [11]. The decay 20F(2+, gs) → 20Ne(0+, gs), which is non-unique second-forbidden, is yet to be measured, and only an upper limit on the branching ratio of 10−5 is experimentally known [12].
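A minimal numerical sketch of the rate integral in Eq. (2.3) is given below, reusing shape_function from the previous snippet. The constant shape factor is a placeholder hindrance scaled by hand; the actual second-forbidden C(E) requires the shell-model matrix elements and is not reproduced here.

```python
import numpy as np
from scipy.integrate import quad

K = 6144.0  # s, from superallowed Fermi transitions [9]

def decay_rate(Q_if, Z, C):
    """Eq. (2.3): lambda = (ln 2 / K) * integral_1^{Q_if} N(E) dE."""
    val, _ = quad(lambda E: shape_function(E, Q_if, Z, C),
                  1.0 + 1e-9, Q_if - 1e-9)
    return np.log(2) / K * val

# Toy branching-ratio estimate against the measured 11.07 s half-life.
# The hindrance factor 1e-10 below is purely illustrative.
lam_total = np.log(2) / 11.07
lam_gs = decay_rate(7.025 / 0.511 + 1.0, Z=10, C=lambda E: 1e-10)
print(f"toy branching ratio ~ {lam_gs / lam_total:.1e}")
```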
While experimental efforts to measure this transition are underway, here we give an estimate based on shell-model wavefunctions for sd-shell nuclei described in [13]. The USDB Hamiltonian [14] is diagonalized in the 1s and 0d configuration space, considering the 0s and 0p shells fully occupied.
Then, making use of the transition density expressed in the harmonic oscillator basis, we use the description of [15,16] of the forbidden transition matrix elements in terms of nuclear matrix elements in order to calculate the shape factor, and then the decay rate by means of Eq. (2.3).
The resulting electron energy spectrum $N^{\beta}_{\mathrm{gs,gs}}(E)$ is shown in Fig. 1, and the evaluated total branching ratio is 1.3 × 10−6. However, the allowed 2+ → 2+ transition is dominant up to its maximum emitted electron energy of 5.901 MeV. Experimental efforts to measure the forbidden transition will therefore have to focus on the window between 5.9 and 7.5 MeV, which contains ≈10% of the forbidden decay's total emission strength.
Electron capture in intermediate mass stars
In recent simulations [4] it has been pointed out that the typical range of temperatures and densities for the onset of an electron-capture supernova is T ≈ 10^8−10^10 K (≈ 10^3−10^5 eV) and ρ ≈ 10^9−10^10 g/cm^3, which imply an electron chemical potential μ_e ≈ 5−11 MeV.
Following the formalism in [6], at finite temperature there is a probability proportional to $e^{-E_i/kT}$ of thermally exciting a state $i$. There is then a competition between thermal excitation of the 20Ne(2+, 1.634 MeV) state, which enables the allowed transition to 20F(2+, gs), and the slow rate of the forbidden transition directly from the 20Ne(0+, gs). At high temperature, the probability of thermal excitation takes over from that of the forbidden ground-state to ground-state transition; at low temperature, the ground-state to ground-state transition is the only one available within a certain range of the chemical potential, i.e., until the 1+ excited state of 20F becomes accessible.
The capture rate also has to take into account the phase space blocked or allowed by the electron distribution,

$$\lambda^{ec}_{if} = \frac{\ln 2}{K} \int_{E_l}^{\infty} C_{if}(E)\, p\,E\,(Q_{if} + E)^2\, F(E)\, S_e(E)\, \mathrm{d}E, \tag{3.1}$$

where $E_l$ is the energy threshold, in units of the electron mass, which is given by

$$E_l = \begin{cases} 1, & Q_{if} > -1,\\ |Q_{if}|, & Q_{if} \le -1, \end{cases} \tag{3.2}$$

and $S_e$ is the electron distribution function which, for an electron gas at temperature $T$ and chemical potential $\mu_e$, follows the Fermi-Dirac distribution

$$S_e(E) = \frac{1}{\exp\left[(E-\mu_e)/kT\right] + 1}. \tag{3.3}$$

The estimated decay rates, as a function of the density, are compared with the previous results of [6] in Fig. 2, where it can be seen that, while the energy dependence of the shape factor does not play a relevant role in this case, the reduction of almost one order of magnitude with respect to the current experimental upper limit implies a noticeable effect. However, the forbidden transition remains dominant for a relevant range of densities and temperatures.
From Eqs. (3.1)-(3.2) we can evaluate the decay rate as a function of density and temperature, and thus verify under which conditions a given transition is dominant over the others (cf. Figs. 3 and 4).
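The interplay of the threshold in Eq. (3.2) and the Fermi-Dirac blocking in Eq. (3.3) can be sketched numerically as follows. The Coulomb correction is omitted and the chemical potentials are representative values only, so the numbers indicate trends rather than the rates shown in Figs. 2-4.

```python
import numpy as np
from scipy.integrate import quad

ME_C2 = 0.511  # MeV

def S_e(E, mu_e, kT):
    """Eq. (3.3): Fermi-Dirac occupation (energies in m_e c^2 units);
    the exponent is clipped to avoid floating-point overflow."""
    x = np.clip((E - mu_e) / kT, -700.0, 700.0)
    return 1.0 / (np.exp(x) + 1.0)

def capture_phase_space(q_if, mu_e, kT):
    """Phase-space integral of p E (q_if + E)^2 S_e(E) above the
    threshold of Eq. (3.2); the Fermi function F(E) is omitted here."""
    E_l = max(1.0, abs(q_if)) if q_if < 0 else 1.0
    f = lambda E: np.sqrt(E**2 - 1.0) * E * (q_if + E)**2 * S_e(E, mu_e, kT)
    val, _ = quad(f, E_l, E_l + 50.0 * kT + 50.0)
    return val

# 20Ne(0+, gs) + e- -> 20F(2+, gs): Q ~ -7.0 MeV, so captures switch on
# as mu_e approaches ~7 MeV (densities near 10^9-10^10 g/cm^3).
kT = 0.05  # ~25 keV, i.e. T ~ 3e8 K (illustrative)
for mu in (5.0, 7.0, 9.0, 11.0):  # representative chemical potentials, MeV
    psi = capture_phase_space(-7.0 / ME_C2, mu / ME_C2, kT)
    print(f"mu_e = {mu:4.1f} MeV -> phase-space factor = {psi:.3e}")
```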
Acknowledgments
This work has been supported by the Helmholtz Association through the Nuclear Astrophysics Virtual Institute (VH-VI-417) and the Helmholtz International Center for FAIR within the frame-
"year": 2015,
"sha1": "d06f1cb5959b3a3e4243c558d0cdab2cfa631d4b",
"oa_license": "CCBYNCSA",
"oa_url": "https://jyx.jyu.fi/bitstream/123456789/47929/1/nic%20xiii002.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "1a25c46c40ed1bc5a1afb313b3339377a9e48926",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
The 3′ processing of antisense RNAs physically links to chromatin-based transcriptional control
Significance
RNA-mediated chromatin regulation is central to gene expression in many organisms. However, the mechanisms by which RNA influences the local chromatin environment are still poorly understood. Here, we show how RNA 3′ processing factors, which promote proximal polyadenylation of an Arabidopsis antisense transcript, physically associate with the chromatin modifiers FLD/LD/SDG26. The chromatin modifiers exist in a protein complex that inhibits H3K4me1 and H3K36me3 accumulation. By antagonizing transcription, the FLD/LD/SDG26-containing complex promotes H3K27me3 accumulation, reducing transcriptional initiation and elongation rates. This cotranscriptionally mediated chromatin silencing mechanism may be widely relevant for gene regulation in many organisms.
Noncoding RNA plays essential roles in transcriptional control and chromatin silencing. At Arabidopsis thaliana FLC, antisense transcription quantitatively influences transcriptional output, but the mechanism by which this occurs is still unclear. Proximal polyadenylation of the antisense transcripts by FCA, an RNA-binding protein that physically interacts with RNA 3′ processing factors, reduces FLC transcription. This process genetically requires FLD, a homolog of the H3K4 demethylase LSD1. However, the mechanism linking RNA processing to FLD function had not been established. Here, we show that FLD tightly associates with LUMINIDEPENDENS (LD) and SET DOMAIN GROUP 26 (SDG26) in vivo, and, together, they prevent accumulation of monomethylated H3K4 (H3K4me1) over the FLC gene body. SDG26 interacts with the RNA 3′ processing factor FY (WDR33), thus linking activities for proximal polyadenylation of the antisense transcripts to FLD/LD/SDG26-associated H3K4 demethylation. We propose this demethylation antagonizes an active transcription module, thus reducing H3K36me3 accumulation and increasing H3K27me3. Consistent with this view, we show that Polycomb Repressive Complex 2 (PRC2) silencing is genetically required by FCA to repress FLC. Overall, our work provides insights into RNA-mediated chromatin silencing.
non-coding RNA | chromatin | polycomb | FLC | Arabidopsis

Both long and short noncoding chromatin-associated RNA transcripts have emerged as key regulators of the chromatin environment (1). Detailed mechanisms of how 21- to 24-nt RNAs initiate and maintain heterochromatin have been elucidated (2). However, less is understood about the mechanisms linking long noncoding RNA, chromatin regulation, and transcription. The most well-studied example is the role of X inactive specific transcript (Xist) in X chromosome inactivation (3). Different repeats on Xist recruit an array of protein factors that silence and conformationally alter the X chromosome (4). The RNA-binding protein SPEN binds the Xist A repeat and has recently been shown to transcriptionally down-regulate X-linked genes and trigger Polycomb silencing in a process requiring nucleosome remodelers and histone deacetylases (5). Similar RNA-mediated chromatin mechanisms act at the single Arabidopsis locus FLOWERING LOCUS C (FLC), which encodes a MADS-box transcription factor that acts as a floral repressor in Arabidopsis thaliana. A well-understood process involving FLC is vernalization, the cold-induced epigenetic silencing that occurs during winter, enabling plants to flower in spring. Cold induces a set of antisense long noncoding transcripts at the FLC locus, called COOLAIR, which mediate transcriptional down-regulation of FLC as a prelude to a Polycomb-induced epigenetic switch (6). However, in a second, less well understood mechanism at FLC, transcription is quantitatively regulated by COOLAIR antisense transcript processing linked to chromatin regulation. This is mediated by a set of genes grouped into the autonomous floral pathway (some of which are putative equivalents of SPEN), which have widespread transcriptional functions in the Arabidopsis genome through RNA-mediated chromatin pathways (7).
The autonomous pathway component FCA is an RNA-binding protein that mediates alternative 3′ end processing of COOLAIR transcripts (8). FCA associates with a coiled-coil protein, FLL2, which promotes formation of liquid-like nuclear condensates that appear to concentrate 3′ processing factors and change their dynamics at specific poly(A) sites (9). The proximal processing of COOLAIR results in an FLC chromatin environment that reduces FLC transcriptional initiation and elongation rates (10). This process requires FLOWERING LOCUS D (FLD), which is a homolog of the H3K4 demethylase LSD1 (11). Nevertheless, how FCA-mediated RNA processing links to FLD remained to be elucidated.
We have investigated this mechanism further, and here we identify two proteins, LUMINIDEPENDENS (LD) and SET DOMAIN GROUP 26 (SDG26), that tightly associate with FLD. Like FLD, LD and SDG26 function genetically in the FLC-repression pathway with FCA. We find that SDG26 transiently interacts with FY, one of the RNA 3′ processing factors that associates with FCA, physically linking FCA to FLD. Through genetic and chromatin immunoprecipitation analysis, we determine that loss of FLD/LD/SDG26, or of FCA, leads to overaccumulation of histone modifications, including H3K4me1/me2 and H3K36me3. Thus, we can now physically link RNA 3′ processing of the COOLAIR transcripts with a chromatin modification complex that influences H3K4me1-H3K36me3 and transcriptional activity at the locus. By antagonizing transcription, the FLD/LD/SDG26-containing complex promotes H3K27me3 accumulation, consistent with a requirement for Polycomb Repressive Complex 2 in the FCA-mediated repression of FLC. We propose that FLD/LD/SDG26 influences an active transcription module that antagonizes PRC2 function.
Results
FLD Associates with LD and SDG26. We previously performed a suppressor mutagenesis screen and identified FLD as one of the components required for FCA-mediated FLC regulation (11). To gain insights into how FLD represses FLC transcription, we used a proteomic approach to search for FLD interactors. We immunopurified FLD from a transgenic line expressing FLD tagged at the carboxyl terminus with FLAG-TAP epitopes (FLD-FLAG-TAP) (10). Mass spectrometric analyses of the FLD immunoprecipitation revealed that FLD tightly associates with LUMINIDEPENDENS (LD) and a SET domain protein, SDG26, in vivo (Fig. 1A and Dataset S1). Purifications from transgenic plants expressing GFP-tagged versions of each protein, but not GFP only or Col-0, enriched the other two proteins of the complex (Fig. 1A and Datasets S2 and S3). The interaction between FLD and SDG26 was confirmed by coimmunoprecipitation (co-IP) in stable transgenic lines (Fig. 1B). Loss of LD or SDG26 caused a reduction in FLD protein levels (Fig. 1C and SI Appendix, Fig. S1). One possible explanation for this is that the interaction between FLD and LD/SDG26 may be required for FLD stability.
LD was one of the first flowering regulators to be cloned based on a late-flowering phenotype of a T-DNA insertion (12), but how its function connected to other autonomous pathway components was unclear. LD encodes a protein carrying an N-terminal homeodomain (SI Appendix, Fig. S2A) and has been reported to bind DNA without sequence specificity (13). SDG26 is a close homolog of SDG8 (SI Appendix, Fig. S2A), the major histone H3K36 methyltransferase in the Arabidopsis genome; however, in vitro and in vivo analysis so far has provided no evidence that SDG26 is an H3K36 methyltransferase. In fact, sdg26 mutants show an opposite (late-flowering) phenotype compared to sdg8 (early flowering) through opposite effects on FLC expression, suggesting different functions or indirect effects (14,15). We tested the subcellular localization of FLD, LD, and SDG26 in stable transgenic lines and found that they are all nuclear-localized (SI Appendix, Fig. S2B).
LD and SDG26 Function Genetically in the Same Pathway as FLD and FCA. Similar to fld mutant, loss-of-function mutations of LD and SDG26 showed a late-flowering phenotype and increased FLC expression ( Fig. 2 A-C). In order to dissect the genetic relationships between FLD, LD, and SDG26, we combined the mutations to create double mutants. The results showed that fld ld, fld sdg26, and ld sdg26 did not give any additional lateness ( Fig. 2A) or increase in spliced FLC RNA levels ( Fig. 2B), but did lead to higher unspliced FLC RNA levels ( Fig. 2C), compared to the single mutants. The inconsistency between spliced and unspliced FLC suggests that, similar to Paf1C (16), FLD, LD, and SDG26 might have a concerted role in regulating the release of nascent FLC transcripts.
FLD has been shown to function in the same genetic pathway as, and downstream of, FCA, in that fld is not additive to fca with respect to flowering time, and fld suppressed the ability of FCA to down-regulate FLC (11). To test whether LD and SDG26 behave in the same way as FLD, we first combined ld and sdg26 with fca and found no additivity compared to fca with respect to flowering time. Combination of a 35S-FCA transgene, with and without the FLC activator FRIGIDA, with the ld and sdg26 mutations then showed that both mutations compromised the effect of overexpressed FCA on FLC (Fig. 2G). Taken together, these data support that FLD, LD, and SDG26 exist in a complex that functions downstream of FCA to repress FLC expression. The strong genetic interactions between FLD/LD/SDG26 and FCA raised the question of how FCA function is molecularly linked to FLD. No in vivo physical interactions of FCA with 3′ processing factors or chromatin regulators had been found until our recent analysis using a technique termed cross-linked nuclear immunoprecipitation and mass spectrometry (CLNIP-MS) (9). We found that FCA interacted with both RNA and a range of proteins and, in vivo, localizes to nuclear condensates that are highly dynamic (9). Those condensates are likely to concentrate 3′ processing factors and contribute to 3′-end processing of RNAs, including COOLAIR (9). We reasoned that the interaction between the FLD/LD/SDG26-containing complex and FCA, if any, would also be transient and dynamic. To this end, we applied CLNIP-MS to SDG26. Surprisingly, we found that, in addition to finding FLD and LD with high peptide counts, some 3′ RNA processing factors were also detected (Fig. 3A and Dataset S4) in the SDG26 immunoprecipitation after cross-linking. These include FCA, as well as the RRM-containing protein FPA (8,17), FY (18,19), and Cleavage/Polyadenylation Specificity Factor 160 (CPSF160), all of which have been shown to associate with FCA and colocalize with FCA in the nuclear condensates (9). Purifications from Col-0 or a transgenic plant expressing a 35S-GFP fusion did not retrieve any of those proteins (Dataset S4). We then set out to confirm the interaction between SDG26 and FY, using an FY antibody raised in rabbits against the native recombinant protein (20). Using an SDG26-FLAG-TAP transgenic line, we performed cross-linked nuclear immunoprecipitation of SDG26 and probed against FY. The result showed that FY was readily detected (Fig. 3B). Without cross-linking, neither FY nor any of the 3′ processing factors were found in the SDG26 immunoprecipitation (Dataset S3). CLNIP-MS of LD also identified FY and FPA (Fig. 3A and Dataset S5). These data suggest that the interactions between the FLD/LD/SDG26-containing complex and 3′ processing factors provide a physical link, so that, when 3′ RNA processing of proximal COOLAIR occurs, the FLD/LD/SDG26-containing complex is brought in to repress FLC transcription.
Loss of FLD/LD/SDG26 Results in Overaccumulation of H3K4me1 at FLC. Our mathematical modeling and experimental evidence have shown that FLD-mediated repression of FLC is achieved in a manner consistent with a coordinated reduction of transcription initiation and Pol II elongation rates (10). Whether and how this is connected to histone modifications is not fully understood.
Arabidopsis has four homologs of human LSD1: FLD, LDL1, LDL2, and LDL3 (21). The fld mutation led to a limited 1.5- to 2-fold increase of H3K4me2 on FLC (10, 11). More recently, the ldl2 mutation was shown to increase gene body H3K4me1, which correlated positively with gene expression (22). We therefore decided to analyze the effect of FLD, LD, and SDG26 mutations on H3K4me1 and H3K4me2 levels at FLC. Chromatin immunoprecipitation coupled with quantitative PCR (ChIP-qPCR) showed a small increase of H3K4me2 at 1 to 4 kb beyond the TSS of FLC in fld (Fig. 4 A and C), consistent with previous reports (10,11). Surprisingly, we observed a much more dramatic increase of H3K4me1 over the FLC gene body in fld (Fig. 4 A and B). ld and sdg26 also significantly overaccumulated H3K4me1 (Fig. 4B), indicating a major role of the FLD/LD/SDG26-containing complex in inhibiting H3K4me1 accumulation through the demethylase activity of FLD. It is also noteworthy that sdg26 accumulated more H3K4me2 than fld (Fig. 4C), suggesting a role for the FLD/LD/SDG26-containing complex in a stepwise removal of H3K4me2 and H3K4me1, with each component contributing differently to this activity. fca-9 showed a large increase in H3K4me1 and a similar increase in H3K4me2 as sdg26, in agreement with FLD/LD/SDG26 functioning genetically downstream of FCA (SI Appendix, Fig. S3 A-C). Given that SDG26 features a SET domain, a hallmark of histone methyltransferases, we sought to determine whether the FLD/LD/SDG26-containing complex, in addition to FLD-mediated demethylation, could also directly alter chromatin states through SDG26-mediated histone methylation. However, we failed to detect activity of SDG26 toward recombinant Arabidopsis nucleosomes in vitro, either for heterologously expressed SDG26 or for the FLD/LD/SDG26 complex purified from Sf9 cells, nor for the endogenous FLD/LD/SDG26-containing complex purified via FLD-FLAG-TAP purification (SI Appendix, Fig. S4). Overall, these findings suggest demethylation of H3K4 is a major activity of the complex.
SDG8 Is Epistatic to FLD/LD/SDG26 to Activate FLC. H3K4me1 is enriched at enhancers as well as gene bodies in mammalian cells (23). Recent studies suggested that H3K4me1 might fine-tune, rather than tightly control, enhancer activity and function (24-26). In plants, H3K4me1 is mainly found in gene bodies, removal of which mediates transcriptional silencing (22). Interestingly, the CW domain of Arabidopsis SDG8, an H3K36me3 methyltransferase, preferentially binds H3K4me1 (27,28), providing a mechanism to link H3K4me1 to delivery of the active histone modification H3K36me3. Consistent with this, we found loss of the FLD/LD/SDG26-containing complex, as well as FCA, led to a large overaccumulation of H3K36me3 in the FLC gene body (Fig. 4D and SI Appendix, Fig. S3D), which mirrored the change of H3K4me1 (Fig. 4B and SI Appendix, Fig. S3B). In addition, H3K27me3, the mutually exclusive histone modification of H3K36me3, was greatly reduced in the fld-4, ld, and fca mutants (Fig. 4E and SI Appendix, Fig. S3E). Consistent with this, SDG8 ChIP did not show signal on FLC in the Col-0 background (29), where H3K4me1 was kept at a very low level (Fig. 4B). The connection between H3K4me1 and H3K36me3 raised the possibility that FLD/LD/SDG26 repressed FLC via removal of H3K4 methylation, thereby inhibiting SDG8-mediated H3K36me3 and indirectly promoting the accumulation of H3K27me3. To test this possibility, we generated the fld sdg8 double mutant and found that the sdg8 mutation completely suppressed both the fld-induced higher expression of FLC (Fig. 5 A and B) and the resulting delayed flowering time (Fig. 5C). This would suggest that the FLD/LD/SDG26 repression of FLC transcription involves inhibition of SDG8 function. In comparison, the sdg8 mutation largely, but not completely, reversed the expression of FLC (Fig. 5 A and B) and flowering time (Fig. 5C) caused by fca-9, suggesting that FCA can, to a limited extent, also repress FLC via a pathway that is independent of FLD and SDG8.
FCA Requires PRC2 to Silence FLC. The above data support a model where the alternative 3′ processing of COOLAIR by FCA mediates the silencing of FLC by Polycomb Repressive Complex 2 (PRC2) via inhibiting an active transcription module consisting of H3K4me1, H3K36me3, and transcription, which antagonizes the deposition of H3K27me3 (30). We tested this model by asking whether PRC2 is required by FCA to silence FLC. We took advantage of an Arabidopsis progenitor line carrying a single insertion of a 35S::FCAγ transgene in combination with an active FRIGIDA allele, in an otherwise wild-type background, which we had used to identify mutations suppressing the ability of FCA to down-regulate FLC (11). This sensitized background enhances FLC derepression and so is an efficient way to screen for factors required for FCA function. A weak allele of clf, reduced in PRC2 H3K27me3 methyltransferase activity (31), was introduced into this 35S::FCAγ genotype. clf-81 strongly released FLC expression, much more than in the Col background (Fig. 5 D and E), supporting that FCA requires PRC2 to silence FLC. In line with our findings, Tian et al. showed that CLF enrichment at the FLC locus requires FCA function (32).
Discussion
Studying the quantitative transcriptional regulation of the A. thaliana floral repressor FLC has led us to dissect how alternative processing of antisense transcripts regulates the local chromatin environment and thus transcriptional output (7). We find that dynamic interactions between RNA-binding proteins, 3′ processing factors, and the chromatin modifiers FLD/LD/SDG26 result in a chromatin state associated with low transcriptional initiation and slow elongation, marked by low H3K4me1, low H3K36me3, and high H3K27me3. Loss of any of the factors switches the locus to the opposing high-transcription state, with overaccumulation of H3K4me1 and H3K36me3 and reduction of H3K27me3. We propose that FLD/LD/SDG26 exist in a complex that inhibits an active transcription module, so promoting the deposition of H3K27me3 (SI Appendix, Fig. S5). This process parallels the cleavage and polyadenylation factor (CPF)-mediated facultative heterochromatin assembly in yeast (33), the exact mechanism of which is still unknown.
FCA associates dynamically with 3′ processing factors in FCA nuclear bodies (9). The fact that the interactions between SDG26 and 3′ processing factors were only detected after cross-linking suggested that these interactions are also dynamic, and raised the possibility that FLD/LD/SDG26 might colocalize in FCA nuclear bodies. LD, like FCA and FY, has been found to contain a prion-like domain (34) (SI Appendix, Fig. S2A), which was identified as a driver for ribonucleoprotein granule assembly (35), and LD formed distinct foci when expressed in yeast cells (34). However, under normal confocal microscopy and expressed at endogenous levels, neither FLD, LD, nor SDG26 formed obvious nuclear bodies (SI Appendix, Fig. S2B). One possible explanation is that FLD/LD/SDG26 form nuclear bodies in vivo that are too dynamic or too small to be detected by normal confocal microscopy. Superresolution microscopy analysis of FLD, LD, and SDG26 subcellular localization will help to address this question. Notably, not all genes in the genome targeted by FCA for RNA processing also need the FLD/LD/SDG26-containing complex for silencing (36). This agrees with our finding that FCA immunoprecipitation after cross-linking did not recover FLD, LD, or SDG26 (9). In addition, genetic data suggested that, even at the FLC locus, FCA could function in FLD-independent pathways to achieve some measure of silencing (Fig. 5 A and B) (11). A recent study showed that FCA interacts with CLF in vitro and in vivo, suggesting an FLD-independent role of FCA in regulating H3K27me3 directly (32). However, we did not detect this interaction in our in vivo immunoprecipitation-mass spectrometry (IP-MS) of FCA (9), and it was not detected in in vivo IP-MS of CLF (37). An important question raised by this work is what constitutes the active transcription module that the FLD/LD/SDG26-containing complex inhibits. We were unable to find any histone methyltransferase activity in vitro for the FLD/LD/SDG26 complex (SI Appendix, Fig. S4), suggesting that additional components are required for the complex to exert its function. One tantalizing hypothesis is that the histone-modifying activity is tightly linked to the RNA polymerase II (Pol II) complex during transcription. Indeed, we detected Pol II subunits (e.g., NRPB1, NRPB2, and NRPB3) and factors involved in the regulation of transcription initiation and elongation (e.g., SPT5, SPT6, and SPT16) in the SDG26 CLNIP-MS list (Dataset S4). In addition, LD contains a PP1-AP-like domain shared with the transcription elongation factor TFIIS, suggesting a role for LD in transcriptional elongation (38). Further analysis of these possibilities will expand our understanding of how the RNA-binding protein FCA connects COOLAIR to antagonizing an active transcription module, thereby eventually leading to Polycomb silencing. Full dissection of this mechanism will reveal any further parallels between COOLAIR and Xist function, thus elaborating our evolutionary understanding of RNA-mediated chromatin silencing.
Materials and Methods
More detailed descriptions of the materials and methods used in this study are provided in the SI Appendix. A brief summary is provided here.
Flowering Time Analysis. The flowering time was determined essentially as described (9). Briefly, plants were grown in long-day conditions, and the total leaf number (TLN) produced before the initiation of flowering was counted to measure variation in flowering time.
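Flowering-time differences in this study are assessed with two-tailed t tests on TLN counts; a minimal sketch of that comparison is shown below, using entirely hypothetical counts (not data from this work).

```python
import numpy as np
from scipy import stats

# Hypothetical total-leaf-number (TLN) counts for two genotypes
# (n >= 10 plants each, as in the paper; values are illustrative only):
tln_col0 = np.array([12, 13, 11, 12, 14, 12, 13, 11, 12, 13])
tln_fld = np.array([25, 27, 24, 26, 28, 25, 27, 26, 24, 26])

# Two-sample t test (two-tailed by default in SciPy):
t_stat, p_value = stats.ttest_ind(tln_col0, tln_fld)
print(f"t = {t_stat:.2f}, two-tailed P = {p_value:.2e}")
```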
RNA Analysis. RNA analysis was performed as described previously (9). Briefly, total RNA was extracted, treated with DNase, and reverse-transcribed by SuperScript IV Reverse Transcriptase (Invitrogen) using gene-specific reverse primers. Quantitative reverse transcription and PCR (qPCR) analysis was performed on a LightCycler480 II (Roche), and qPCR data were normalized to UBC. Primer pairs for amplifying unspliced FLC, spliced FLC, and UBC are listed in SI Appendix, Table S1.
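For the qPCR normalization to UBC, a standard approach is the Livak 2^-ΔΔCt calculation; the sketch below assumes this method and uses invented Ct values, so it illustrates the bookkeeping rather than reproducing the paper's exact quantification.

```python
import numpy as np

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^-ddCt fold-change of a target gene (e.g., FLC) normalized
    to a reference gene (e.g., UBC), relative to a control sample.
    Inputs are replicate Ct arrays; returns the mean fold-change."""
    d_ct = np.mean(ct_target) - np.mean(ct_ref)                  # sample dCt
    d_ct_ctrl = np.mean(ct_target_ctrl) - np.mean(ct_ref_ctrl)   # control dCt
    return 2.0 ** -(d_ct - d_ct_ctrl)

# Hypothetical triplicate Ct values (not data from the paper):
fold = relative_expression([24.1, 24.3, 24.0], [20.0, 20.1, 19.9],
                           [27.5, 27.6, 27.4], [20.0, 20.2, 20.1])
print(f"FLC expression relative to control: {fold:.1f}-fold")
```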
Materials and Data Availability. Full lists of mass spectrometry data are provided as Datasets S1-S5. All of the other raw data and materials that support the findings of this study are available from the corresponding authors upon reasonable request.

[Figure caption fragment: data are presented as the mean ± SD (n = 3); asterisks indicate significant differences between the indicated plants (*P ≤ 0.0217, **P = 0.0043, ****P = 6.27105E-05, two-tailed t test); each dot represents one biological replicate. (C) Flowering time of the indicated plants (assayed as total leaf number produced by the apical meristem before it switched to producing flowers) grown in a long-day photoperiod; mean ± SD (n ≥ 10); ****P ≤ 2.26769E-09, two-tailed t test. (D and E) Expression of spliced FLC (D) and unspliced FLC (E) relative to UBC in the indicated genotypes; expression in each mutant background was normalized separately to its corresponding wild-type background; mean ± SD (n = 3); *P ≤ 0.0458, two-tailed t test.]
"year": 2020,
"sha1": "e0bf9ab895e842f414aca189a1ebbcdcec7dc40a",
"oa_license": "CCBY",
"oa_url": "https://www.pnas.org/content/pnas/117/26/15316.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "bb050bffc4ff70e6bcd80338e82bc2791437cc38",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Efficient Synthesis of Hydrolytically Degradable Block Copolymer Nanoparticles via Reverse Sequence Polymerization-Induced Self-Assembly in Aqueous Media
Abstract Hydrolytically degradable block copolymer nanoparticles are prepared via reverse sequence polymerization-induced self-assembly (PISA) in aqueous media. This efficient protocol involves the reversible addition-fragmentation chain transfer (RAFT) polymerization of N,N′-dimethylacrylamide (DMAC) using a monofunctional or bifunctional trithiocarbonate-capped poly(ε-caprolactone) (PCL) precursor. DMAC monomer is employed as a co-solvent to solubilize the hydrophobic PCL chains. At an intermediate DMAC conversion of 20-60 %, the reaction mixture is diluted with water to 10-25 % w/w solids. The growing amphiphilic block copolymer chains undergo nucleation to form sterically-stabilized PCL-core nanoparticles with PDMAC coronas. 1H NMR studies confirm more than 99 % DMAC conversion, while gel permeation chromatography (GPC) studies indicate well-controlled RAFT polymerizations (Mw/Mn ≤ 1.30). Transmission electron microscopy (TEM) and dynamic light scattering (DLS) indicate spheres of 20-120 nm diameter. As expected, hydrolytic degradation occurs within days at 37 °C in either acidic or alkaline solution. Degradation is also observed in phosphate-buffered saline (PBS) (pH 7.4) at 37 °C. However, no degradation is detected over a three-month period when these nanoparticles are stored at 20 °C in deionized water (pH 6.7). Finally, PDMAC30-PCL16-PDMAC30 nanoparticles are briefly evaluated as a dispersant for an agrochemical formulation based on a broad-spectrum fungicide (azoxystrobin).
Introduction
[3-7] Conventional PISA involves growing an insoluble block from a soluble precursor block in a suitable solvent. At some critical degree of polymerization (DP), micellar nucleation occurs to form monomer-swollen nascent nanoparticles, which then act as the locus for the remaining polymerization to produce sterically-stabilized nanoparticles. [8,9] Traditionally, PISA has been conducted using vinyl monomers, which inevitably leads to non-degradable nanoparticles. This is an unfortunate limitation, because such nanoparticles offer a wide range of potential applications, including dispersants for agrochemicals, [10,11] emulsifiers for the production of Pickering nanoemulsions, [12-14] thermoresponsive biocompatible hydrogels for either cell culture or long-term stem cell storage, [15,16] and low-viscosity lubricants for automotive engine oils. [17] In each case, nanoparticle degradability would be a desirable "value-added" feature.
Given the strong global demand for more environmentally-friendly polymers, several attempts have been made to develop PISA routes to degradable nanoparticles. [21] Similarly, Roth and co-workers [22-24] (and more recently other research groups [25-29]) reported the statistical copolymerization of dibenzo[c,e]oxepane-5-thione (DOT) with various vinyl monomers. Only a relatively low level of incorporation of such a cyclic comonomer is required to obtain oligomers after hydrolytic degradation. However, this approach suffers from some technical problems. First, such cyclic comonomers invariably require multi-step syntheses for which the overall yield is relatively low. Moreover, the radical ring-opening copolymerization of cyclic ketene acetals (CKAs) with vinyl monomers often retards the overall rate of polymerization, which makes full comonomer conversion difficult to achieve. Finally, such cyclic comonomers are typically water-insoluble, which may complicate their use in aqueous formulations. However, significant progress has recently been made with regard to some of these problems. [21,26,30] In particular, RAFT aqueous emulsion copolymerization of DOT with styrene has been achieved with full comonomer conversion and a relatively narrow molecular weight distribution. [30] In view of the above problems, alternative approaches to degradable nanoparticles have been developed. Notably, Lecommandoux's group reported the synthesis of degradable polypeptide-based diblock copolymers via ring-opening polymerization of N-carboxyanhydrides (NCAs) using a poly(ethylene glycol)-based (PEG) precursor. [31,32] Remarkably, such syntheses can be conducted directly in aqueous media. However, the synthesis of NCA monomers usually requires the use of highly toxic phosgene. [33] Nevertheless, this approach enabled the synthesis of degradable rod-like nanoparticles. [31,32] Similarly, Du and co-workers reported the preparation of well-defined diblock copolymer vesicles via NCA polymerization when using a monofunctional PEG precursor, albeit in THF rather than water. [34] Recently, we reported a novel approach to PISA known as reverse sequence PISA. [35,36] This involved the RAFT aqueous dispersion polymerization of 2-hydroxypropyl methacrylate (HPMA) using an anionic water-soluble RAFT agent to produce charge-stabilized PHPMA latexes.
Subsequent chain extension of these relatively large precursor particles using a water-miscible methacrylic monomer leads to the formation of much smaller sterically-stabilized spherical nanoparticles. Such aqueous formulations are rather counter-intuitive because they involve synthesis of the hydrophobic block first. Herein we report a new type of reverse sequence PISA formulation that provides convenient access to hydrolytically degradable nanoparticles via an aqueous protocol. This involves initial solubilization of a hydrophobic trithiocarbonate-capped poly(ε-caprolactone) (PCL) precursor with the aid of a suitable water-miscible monomer (N,N′-dimethylacrylamide, DMAC), see Scheme 1. The DMAC initially serves as a co-solvent to ensure dissolution of the otherwise water-insoluble PCL. Subsequently, RAFT polymerization of DMAC is conducted at 80 °C either in concentrated aqueous solution or in the bulk. Once a sufficiently long PDMAC block has been grown, the homogeneous reaction mixture is diluted with water to 10-25 % w/w solids. As DMAC monomer is consumed, the growing amphiphilic block copolymer chains undergo nucleation to form sterically-stabilized PCL-core spherical nanoparticles. These nanoparticles are characterized in terms of their morphology and size by transmission electron microscopy (TEM) and dynamic light scattering (DLS), respectively. Moreover, their long-term hydrolytic degradation in aqueous solution is studied under various conditions. Finally, selected nanoparticles were evaluated as a putative dispersant for the formulation of a well-known agrochemical compound (azoxystrobin, a broad-spectrum fungicide).

Scheme 1. Synthesis of a bifunctional trithiocarbonate-capped RAFT agent (TTC-PCL16-TTC) via DCC/DMAP-catalyzed esterification of a dihydroxy-capped PCL precursor using a carboxylic acid-functionalized RAFT agent (CEPA). Subsequently, PDMACx-PCL16-PDMACx nanoparticles are prepared at 80 °C via reverse sequence PISA. Initially, the RAFT polymerization of DMAC is conducted either in the bulk or at 80 % w/w solids, with subsequent dilution to 10-25 % w/w solids using deoxygenated deionized water at a suitable intermediate DMAC conversion. Conditions: [TTC]/[ACVA] molar ratio = 5.
Results and Discussion
[37,38] In principle, this approach could form part of the solution to the global challenge of plastic waste. [39,40] Accordingly, it has been evaluated for solution polymerization, [22,37] conventional aqueous emulsion polymerization, [29,41,42] and RAFT-mediated PISA. [19,20,30] Nevertheless, this strategy currently suffers from several potential disadvantages, as discussed above. Herein, we propose an alternative route to hydrolytically degradable block copolymer nanoparticles based on a new reverse sequence PISA formulation, as summarized in Scheme 1.
Both monohydroxy-capped (Scheme S1) and dihydroxy-capped PCL precursors were derivatized using a carboxylic acid-functionalized RAFT agent (CEPA) via esterification catalyzed by N,N′-dicyclohexylcarbodiimide (DCC) and 4-(dimethylamino)pyridine (DMAP). [45,46] In principle, this should aid solubilization of the hydrophobic PCL precursor when using DMAC monomer as a co-solvent. Indeed, the TTC-PCL16-TTC precursor (where TTC denotes the trithiocarbonate end-groups) is fully soluble in this monomer at 20 °C when 40 or more molar equivalents of DMAC are added (see Figure S1). However, for syntheses conducted using an 80 % w/w aqueous solution of DMAC (see Scheme 1), the TTC-PCL16-TTC precursor is insoluble at 20 °C and only becomes fully dissolved on heating up to the polymerization temperature of 80 °C, see Figure S2. Similar solubility behavior was observed for the PCL42-TTC precursor (see Figures S3 and S4).
1H NMR spectroscopy was used to determine a mean degree of esterification of 98 ± 1 % for this bifunctional precursor by comparing the integrated proton signal at 1.91 ppm assigned to the methyl group of the RAFT agent with the unique PCL backbone signals at 4.07, 2.33 and 1.66 ppm (see Figure 1). This technique was also used to determine the mean degree of polymerization of each of the three monohydroxy-capped PCL precursors and to confirm the mean degree of polymerization of the as-received dihydroxy-capped PCL precursor (see Figures S5-S8). Finally, high degrees of esterification were confirmed for the corresponding three monofunctional PCL21-TTC (98 ± 3 %), PCL29-TTC (96 ± 2 %) and PCL42-TTC (100 ± 1 %) precursors, see Figures S9-S12. Furthermore, gel permeation chromatography (GPC) analysis using a UV detector indicated that no residual CEPA RAFT agent remained after purification (see Figure S13).
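End-group analysis of this kind reduces to ratios of normalized integrals. The sketch below shows the arithmetic, assuming the 4.07 ppm PCL backbone signal (2H per repeat unit) is compared against the 1.91 ppm RAFT-agent methyl (3H per chain end); the integral values are hypothetical, not the measured ones.

```python
def mean_dp(backbone_integral, endgroup_integral,
            protons_per_repeat=2, protons_per_end=3):
    """Mean degree of polymerization per chain end from 1H NMR:
    DP = (backbone integral / protons per repeat)
         / (end-group integral / protons per end).
    Here: PCL OCH2 backbone signal at 4.07 ppm (2H per repeat) vs. the
    RAFT-agent methyl at 1.91 ppm (3H per trithiocarbonate end-group)."""
    return ((backbone_integral / protons_per_repeat)
            / (endgroup_integral / protons_per_end))

# Hypothetical integrals, normalized so the end-group methyl = 3.00;
# a bifunctional chain has two end-groups, so DP per chain = 2 * DP per end.
dp_per_end = mean_dp(backbone_integral=16.0, endgroup_integral=3.00)
print(f"PCL DP per end-group ~ {dp_per_end:.0f}; "
      f"per bifunctional chain ~ {2 * dp_per_end:.0f}")
```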
Initial DMAC polymerizations were conducted in the bulk to ensure complete dissolution of the relevant hydrophobic PCL precursor. Subsequently, it was discovered that homogeneous reaction solutions could also be obtained in the presence of a small amount of water (see Scheme 1). Once sufficient DMAC had been polymerized, a significant increase in the solution viscosity was observed. This visual cue was used to indicate when to add the deoxygenated deionized water to produce a more dilute reaction mixture. This dilution step produced an aqueous dispersion of PCL-core nanoparticles. Currently, it is not known whether any nascent nanoparticles are formed prior to dilution. Analytical techniques such as DLS or TEM require substantial dilution of the reaction mixture: this would inevitably induce nanoparticle formation (if it had not already occurred) because the solvency for the PCL block is reduced. In principle, time-resolved small-angle X-ray scattering (SAXS) could determine whether nucleation had occurred prior to dilution of the reaction mixture. However, such experiments require access to a synchrotron X-ray source and are beyond the scope of the present study.
A kinetic study was conducted when targeting a PDMAC DP of 80 using a TTC-PCL16-TTC precursor at 80 °C. After 7.5 min, the initial bulk polymerization was diluted with deoxygenated water to produce a 10 % w/w aqueous dispersion of nascent sterically-stabilized triblock copolymer nanoparticles (Figure 2a). The reaction mixture was periodically sampled and the resulting aliquots were analyzed by 1H NMR spectroscopy. On addition of water after 7.5 min, the instantaneous DMAC conversion was estimated to be 25 %, and a final monomer conversion of 98 % was achieved after 50 min at 80 °C. Interestingly, dilution of the reaction mixture from the bulk to 10 % w/w solids did not result in any discernible reduction in the rate of polymerization. This is presumably because the polymerization of acrylamides proceeds much faster in dilute aqueous solution than in the bulk. [47,48] For example, Buback and co-workers reported a nine-fold increase in the propagation rate constant (kp) for the free radical polymerization of DMAC in 20 % aqueous solution at 80 °C compared to the corresponding bulk polymerization at the same temperature. [48] Unlike conventional aqueous PISA formulations, no dramatic increase in the rate of polymerization is observed after micellar nucleation. [9] This is simply because the growing PDMAC steric stabilizer chains are located on the outside of the nanoparticles, so there is no tangible benefit if the nanoparticle cores become swollen with unreacted DMAC monomer. GPC analysis of this reverse sequence PISA formulation indicated a linear evolution in molecular weight for the triblock copolymer chains with conversion. Moreover, a very high blocking efficiency was observed and the molecular weight distribution remained relatively narrow (Mw/Mn < 1.30), indicating a well-controlled RAFT polymerization (see Figures 2b and 2c).
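The expected linear Mn evolution can be checked against the standard RAFT chain-extension relation, as sketched below; the macro-CTA Mn of 2400 g/mol is an assumed illustrative value, not the measured one.

```python
def theoretical_mn(conversion, monomer_mw, target_dp, macro_cta_mn):
    """Standard RAFT chain-extension estimate:
    Mn = Mn(macro-CTA) + target_DP * conversion * M(monomer),
    assuming negligible initiator-derived chains."""
    return macro_cta_mn + target_dp * conversion * monomer_mw

# DMAC is 99.13 g/mol; targeting DP 80 from a TTC-PCL16-TTC macro-CTA
# whose Mn is taken as ~2400 g/mol purely for illustration.
for x in (0.25, 0.50, 0.75, 0.98):
    mn = theoretical_mn(x, 99.13, 80, 2400.0)
    print(f"conversion {x:4.0%}: theoretical Mn ~ {mn:7.0f} g/mol")
```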
Having confirmed the efficient synthesis of well-defined block copolymers by this new reverse sequence PISA route, a library of PDMACx-PCL16-PDMACx triblock copolymers and PCLy-PDMACz diblock copolymers was prepared by systematically varying the target DP of each block (see Table S1). Representative GPC traces are shown in Figure 3a when using a bifunctional TTC-PCL16-TTC precursor to target a range of PDMAC DPs for initial bulk polymerizations conducted at 80 °C; each reaction mixture was then diluted to 10 % w/w solids using deoxygenated deionized water within 10 min. In all cases, aqueous colloidal dispersions of block copolymer nanoparticles were obtained. High blocking efficiencies were observed when using either a bifunctional TTC-PCL16-TTC precursor or a monofunctional PCL21-TTC precursor, as indicated by UV GPC analysis at λ = 305 nm (see Figure S14). However, UV GPC analysis revealed tailing towards low molecular weight when using either PCL29-TTC or PCL42-TTC, which suggests a small fraction of dead chains.
Furthermore, at least 99 % DMAC conversion was obtained after 16 h, as judged by 1H NMR spectroscopy. Similarly, a series of monofunctional PCLy-TTC precursors were employed to target a range of PDMAC DPs using the same synthetic protocol (see Figure 3b-d). Again, essentially full DMAC conversion was achieved in each case within 16 h. Furthermore, such syntheses could also be conducted at 80 % w/w solids, where a range of PDMAC DPs were targeted using a bifunctional TTC-PCL16-TTC or a PCLy-TTC precursor (see Table S2). The resulting GPC data were consistent with those produced from polymerizations initiated in the bulk (see Figures S15 and S16). Kinetic analysis of DMAC polymerizations performed in 80 % w/w aqueous solution (followed by dilution to 10 % w/w after 7 min) indicated that good control could also be achieved when a small amount of water was present at the beginning of the polymerization (see Figure 4).
TEM analysis of the resulting PDMAC30-PCL16-PDMAC30 nanoparticles confirmed a spherical morphology (see Figure 5a) with an estimated number-average diameter of 16 ± 3 nm (based on digital image analysis of at least 100 nanoparticles), while DLS studies indicated a hydrodynamic z-average diameter of approximately 21 nm (Figure 5b).
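DLS reports the z-average diameter via the Stokes-Einstein relation from the measured diffusion coefficient. A minimal sketch, assuming water at 25 °C and an illustrative diffusion coefficient (not a value from this study), is given below.

```python
import math

def hydrodynamic_diameter(diffusion_coeff, temp_k=298.15, viscosity=8.9e-4):
    """Stokes-Einstein: D_h = k_B T / (3 pi eta D), in SI units.
    The default viscosity is that of water at 25 C (~0.89 mPa s)."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return k_B * temp_k / (3.0 * math.pi * viscosity * diffusion_coeff)

# A diffusion coefficient of ~2.3e-11 m^2/s corresponds to ~21 nm:
D = 2.3e-11
print(f"D_h = {hydrodynamic_diameter(D) * 1e9:.1f} nm")
```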
Aqueous electrophoresis studies revealed zeta potentials close to zero over a wide pH range, as expected given the non-ionic nature of the PDMAC steric stabilizer chains (Figure 5c). TEM analysis was also performed on the PCLy-PDMACz diblock copolymer nanoparticles. Spherical nanoparticles were observed in all three cases, with estimated number-average diameters of 33 ± 5 nm, 40 ± 5 nm and 48 ± 8 nm (based on digital image analysis of at least 100 nanoparticles in each case), see Figure 6. DLS studies indicated a z-average diameter of approximately 52 nm for the PCL21-PDMAC80 nanoparticles and 68 nm for the PCL29-PDMAC100 nanoparticles (see Figures 6a and 6b). In contrast, colloidal aggregates of around 112 nm diameter were observed for PCL42-PDMAC120, indicating weak flocculation of the primary nanoparticles in this case (see Figure 6c). DLS studies performed on several series of PDMACx-PCL16-PDMACx and PCLy-PDMACz copolymers produced consistent data when varying the target PDMAC DP (see Figures S17-S22). After demonstrating the aqueous synthesis of PCL-core nanoparticles at 10 % w/w solids, higher final nanoparticle concentrations were investigated. For example, a bifunctional TTC-PCL16-TTC precursor was used to target a PDMAC DP of 80. The initial bulk polymerization was diluted to 15-25 % w/w solids using deoxygenated deionized water (see Table S3). GPC data indicated that varying the final nanoparticle concentration had no discernible effect on the nature of the copolymer chains (see Figure S23). However, increasing the nanoparticle concentration had a significant effect on the physical appearance of the final colloidal dispersion (see Figure S24). Thus, a free-flowing dispersion was obtained at 10 % w/w, a highly viscous fluid at 15 % w/w, and a free-standing gel was obtained at either 20 % w/w or 25 % w/w. The gels were analyzed via shear-induced polarized light imaging (SIPLI) to determine whether gelation was due to the presence of worm-like nanoparticles. [49] However, no characteristic Maltese cross (indicating the presence of anisotropic particles) was observed for any of these gels, suggesting that gelation is instead due to close-packed spherical micelles. [50] This was confirmed by diluting the 20 % w/w and 25 % w/w gels to 10 % w/w. As expected, degelation occurred on dilution below the minimum copolymer concentration required for spherical micelle gels. Moreover, DLS studies indicate a monotonic increase in the z-average diameter and a gradual broadening of the particle size distribution on raising the nanoparticle concentration from 10 % w/w to 25 % w/w solids.
Hydrolytic Degradation of Block Copolymer Nanoparticles
… [53] For comparison, degradation studies were also conducted at either pH 10.8 or pH 2.9 at the same temperature. For the PDMAC 50 -PCL 16 -PDMAC 50 nanoparticles, hydrolytic degradation was always observed at 37 °C regardless of the solution pH. Random scission of ester bonds within the PCL chains led to an approximate halving of the molecular weight. Subsequently, full degradation of the PCL block produced low molecular weight water-soluble PDMAC chains. As expected, the fastest rate of ester hydrolysis was observed in alkaline solution (Figure 7). It is perhaps worth emphasizing that the hydrolysis conditions employed herein are much milder than those typically reported in the literature for hydrolytically degradable copolymers prepared via statistical copolymerization of cyclic monomers (e.g. CKAs or DOT) with vinyl monomers, [19,22,28] such as (POEGMA-b-P(LMA-co-MPDL)) nanoparticles. [21] A 10 % w/w aqueous dispersion of the same batch of PDMAC 50 -PCL 16 -PDMAC 50 nanoparticles remained colloidally stable after aging for 12 weeks at 20 °C (Figure 8). Indeed, a slightly narrower intensity-average particle size distribution was observed (see Figure S25).
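The statement that random ester scission initially roughly halves the molecular weight follows from a simple statistical argument: one random cut per chain, on average, doubles the number of chains while conserving total mass, so Mn drops by a factor of two. The following illustrative Monte Carlo sketch (a hypothetical chain population, not data from this study) makes that concrete:

```python
# Illustrative sketch of random backbone scission. With a per-bond scission
# probability chosen so that each chain suffers ~1 cut on average, the chain
# count doubles, and Mn (total mass / number of chains) roughly halves.
import random

random.seed(0)
n_chains, dp = 10_000, 100          # hypothetical chain population
p_cut = 1.0 / (dp - 1)              # ~1 scission per chain on average

n_fragments = 0
for _ in range(n_chains):
    cuts = sum(random.random() < p_cut for _ in range(dp - 1))
    n_fragments += cuts + 1         # k cuts split one chain into k + 1 pieces

print(f"Mn(after)/Mn(before) ~ {n_chains / n_fragments:.2f}")  # ~0.50
```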
Finally, degradation studies were also performed on PCL 21 -PDMAC 70 and PCL 42 -PDMAC 120 diblock copolymer nanoparticles in basic, acidic or neutral (PBS; pH 7.4) solution (see Figures S26 and S27). Like the analogous triblock copolymer nanoparticles, hydrolytic degradation occurred in all cases. Perhaps surprisingly, complete degradation of the PCL cores was observed after four weeks in PBS solution at 37 °C, whereas only partial degradation was observed for the triblock copolymer nanoparticles within the same time scale. As with the triblock copolymer, both types of diblock copolymer chains remained intact as judged by GPC analysis when stored as nanoparticle dispersions in deionized water at 20 °C (see Figures S26 and S27).
Use of PDMAC 30 -PCL 16 -PDMAC 30 Nanoparticles in an Agrochemical Formulation
The hydrolytically degradable nature of these new PCL-based nanoparticles is highly desirable for various commercial applications. For example, we recently reported that block copolymer nanoparticles prepared using a conventional aqueous PISA formulation can be used as a dispersant to prepare a concentrated aqueous suspension of a broad spectrum fungicide (azoxystrobin) that is widely used within the agrochemical sector. More specifically, wet ball-milling of ≈ 76 μm azoxystrobin crystals in the presence of well-defined poly(glycerol monomethacrylate)-poly(methyl methacrylate) diblock copolymer nanoparticles of 29 nm diameter led to the formation of a 20 % w/w aqueous suspension of azoxystrobin microparticles of approximately 2 μm diameter. [11] Electron microscopy studies confirmed that the surface of these microparticles was uniformly coated with the nanoparticles, which conferred long-term stability. [11] However, such nanoparticles possess an all-methacrylic backbone so they are classified as a non-degradable nanoplastic under new environmental legislation. Unfortunately, this precludes their use for this agrochemical application and is also problematic for other potential applications in personal care and cosmetics formulations. Nevertheless, in a follow-up study we established the fundamental design rules for using nanoparticle dispersants in the context of various agrochemical compounds (including five fungicides and a pesticide). [10,11] Bearing the latter results in mind, we decided to evaluate the new hydrolytically degradable PDMAC 30 -PCL 16 -PDMAC 30 nanoparticles as a dispersant for azoxystrobin.
[Figure caption fragment: … S2). (b) DLS particle size distribution recorded for the same PDMAC 30 -PCL 16 -PDMAC 30 nanoparticles (inset shows the physical appearance of this aqueous dispersion at 10 % w/w solids). (c) Zeta potential vs. pH curve for a 0.1 % w/w aqueous dispersion of PDMAC 30 -PCL 16 -PDMAC 30 nanoparticles.]
[Figure caption fragment: … S1). Representative DLS particle size distributions recorded for the same three aqueous dispersions.]
Wet ball-milling of coarse azoxystrobin crystals in the presence of PDMAC 30 -PCL 16 -PDMAC 30 nanoparticles produced a 20 % w/w aqueous suspension within 30 min at 20 °C. Optical microscopy studies confirmed a substantial reduction in mean particle size (Figures 9a and 9b) while laser diffraction studies indicated that the final azoxystrobin microparticles had a mean diameter of 2.0 μm (Figure 9c). These observations are very similar to those obtained when using non-degradable methacrylic nanoparticles as a dispersant. [10,11] In principle, such hydrolytically degradable nanoparticles could be used to prepare more environmentally-friendly next-generation agrochemical formulations.
Conclusion
A new strategy for reverse sequence PISA has been developed that enables the efficient synthesis of hydrolytically degradable block copolymer nanoparticles in aqueous media. This approach involves solubilization of a trithiocarbonate-capped monofunctional or bifunctional poly(ɛ-caprolactone) precursor in DMAC. RAFT polymerization of this monomer is then conducted either in the bulk or as an 80 % w/w aqueous solution. At a suitable intermediate conversion, the reaction mixture is diluted by addition of deoxygenated deionized water. Thereafter, the DMAC polymerization proceeds to essentially full conversion within 16 h at 80 °C, producing a 10-25 % w/w aqueous dispersion of sterically-stabilized PCL-core nanoparticles. High blocking efficiencies and narrow molecular weight distributions (M w /M n ≤ 1.30) are obtained, which indicates that the DMAC polymerization is well-controlled. TEM analysis revealed a spherical copolymer morphology regardless of the copolymer composition or architecture, while DLS studies indicated apparent z-average diameters ranging from 20 to 120 nm. Aging 1.0 % w/w aqueous dispersions of PDMAC 50 -PCL 16 -PDMAC 50 nanoparticles at 37 °C led to extensive hydrolytic degradation at either pH 2.9 or pH 10.8. A slower rate of degradation was also observed under milder conditions (pH 7.4), whereas no discernible hydrolysis occurred when aging the same nanoparticles for 12 weeks in deionized water (10 % w/w solids, pH 6.7) at 20 °C. Finally, PDMAC 30 -PCL 16 -PDMAC 30 nanoparticles were evaluated as dispersants for the preparation of concentrated aqueous suspensions of a broad-spectrum fungicide (azoxystrobin) via wet ball-milling. This processing route produced azoxystrobin microparticles of approximately 2 μm diameter. Given the very high monomer conversions, narrow molecular weight distributions and aqueous formulations, reverse sequence PISA offers a highly convenient route to hydrolytically degradable nanoparticles. In this context, it represents an interesting alternative strategy to the statistical copolymerization of cyclic monomers with vinyl monomers, as recently reported by other research groups.
Figure 2.
Figure 2. (a) Conversion vs. time curve (black points) obtained by 1 H NMR analysis for the reverse sequence PISA synthesis of PDMAC 80 -PCL 16 -PDMAC 80 nanoparticles at 80 °C. Initially, the RAFT polymerization of DMAC was conducted in the bulk with subsequent dilution to 10 % w/w solids using deoxygenated deionized water after 7.5 min (or ∼25 % DMAC conversion). Conditions: [TTC]/[ACVA] molar ratio = 5.0. (b) Selected DMF GPC curves (refractive index detector) and (c) the corresponding M n (blue points) and M w /M n (red points) data determined during this reverse sequence PISA synthesis.
Figure 3.
Figure 3. DMF GPC curves (refractive index detector) recorded for a series of block copolymers prepared by reverse sequence PISA in aqueous media using an ACVA initiator at 80 °C. (a) Bifunctional TTC-PCL 16 -TTC precursor and a corresponding series of PDMAC x -PCL 16 -PDMAC x triblock copolymers. (b) Monofunctional PCL 21 -TTC precursor and a corresponding series of PCL 21 -PDMAC z diblock copolymers. (c) Monofunctional PCL 29 -TTC precursor and a corresponding series of PCL 29 -PDMAC z diblock copolymers. (d) Monofunctional PCL 42 -TTC precursor and a corresponding series of PCL 42 -PDMAC z diblock copolymers.
Figure 4.
Figure 4. (a) Conversion vs. time curve (black points) obtained by 1 H NMR spectroscopy for the reverse sequence PISA synthesis of PDMAC 80 -PCL 16 -PDMAC 80 nanoparticles prepared at 80 °C. Initially, the RAFT polymerization of DMAC was conducted at 80 % w/w solids with subsequent dilution to 10 % w/w solids using deoxygenated deionized water after 7 min (which corresponds to ∼31 % DMAC conversion). Conditions: [TTC]/[ACVA] molar ratio = 5.0. (b) The corresponding M n (blue points) and M w /M n (red points) data determined via DMF GPC analysis (refractive index detector).
Figure 9.
Figure 9. Optical microscopy images recorded for azoxystrobin crystals (a) before and (b) after wet ball-milling in the presence of PDMAC 30 -PCL 16 -PDMAC 30 nanoparticles (D z = 21 nm diameter). (c) Laser diffraction particle size distributions recorded for the initial coarse azoxystrobin crystals and the much finer nanoparticle-coated azoxystrobin microparticles obtained after wet ball-milling for 30 min at 20 °C. | 2023-08-01T06:16:34.294Z | 2023-07-31T00:00:00.000 | {
"year": 2023,
"sha1": "b38f59de6e41df1d0727f263ea6344f1abae7ca4",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/anie.202309526",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "303c4de3b1c89c505fd87b7caefa2fc0673ca0c6",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259183975 | pes2o/s2orc | v3-fos-license | The ubiquitin-like protein FAT10 in hepatocellular carcinoma cells limits the efficacy of anti-VEGF therapy
Introduction
Angiogenesis is essential to solid tumor growth and development and is considered a therapeutic target [1][2][3]. As an important proangiogenic signaling factor, vascular endothelial growth factor (VEGF) plays a critical role in stimulating tumor growth by promoting angiogenesis within tumor tissues [4]. Thus, the application of anti-VEGF therapy to inhibit tumor angiogenesis has become an important strategy for the clinical treatment of cancer [5,6]. However, the clinical application of anti-VEGF therapy in cancer treatment is limited owing to its poor efficacy [7]. The key factors involved in limiting the efficacy of anti-VEGF therapy and the underlying molecular mechanisms are complex [8][9][10]. Notably, anti-VEGF therapy can increase the expression of multiple non-VEGF proangiogenic factors, such as basic fibroblast growth factor (bFGF), transforming growth factor (TGF)-β, angiopoietin (ANG), and platelet-derived growth factor (PDGF), which limits the efficacy of anti-VEGF therapy [11][12][13][14]. However, the key factors controlling the simultaneous upregulation of these non-VEGF proangiogenic factors that limit the efficacy of anti-VEGF therapy and the underlying mechanisms have not yet been reported.
Human leukocyte antigen F locus-adjacent transcript 10 (FAT10) is a ubiquitin (Ub)-like protein that has recently been implicated to play important roles in the development of various tumors [15]. FAT10 expression is upregulated upon stimulation with hypoxia or pro-inflammatory cytokines in various cell types [16][17][18]. Moreover, we and others have reported that increased FAT10 expression can enhance the transcriptional activity of the WNT/β-catenin or nuclear factor kappa B (NF-κB) signaling pathway in various cell types [19,20]. These pathways stimulate angiogenesis in tumor tissues by upregulating the expression of various proangiogenic factors in tumor cells [21,22]. In addition, studies have demonstrated that FAT10 directly mediates the Ub-independent proteasomal degradation of substrates [23]. Interestingly, our previous studies have confirmed that FAT10 can directly antagonize the ubiquitination of substrates and consequently stabilize these substrate proteins in different cells [17,19,24,25]. Furthermore, our study demonstrated that FAT10 could simultaneously exert both degradative and stabilizing effects in hepatocellular carcinoma (HCC) cells [26]. However, as a unique Ub-like protein, whether FAT10 in tumor cells is involved in limiting the efficacy of anti-VEGF therapy remains unclear.
In this study, we used human HCC cells to explore the effects and mechanisms of FAT10 in limiting the efficacy of anti-VEGF therapy.
Cell culture
Human HCC cell lines, HCCLM3 and SMMC-7721, human umbilical vein endothelial cells (HUVECs) and human embryonic kidney cell line, HEK293T, were obtained from Shanghai Cell Bank, Type Culture Collection Committee of the Chinese Academy of Sciences (Shanghai, China) and authenticated by short tandem repeat profiling at the Cell Bank. Cells were cultured in DMEM, MEM or L-15 medium (Gibco, Grand Island, NY, USA) supplemented with 10% fetal bovine serum (Gibco, Grand Island, NY, USA) at 37 °C in 5% CO 2 .
In vivo ubiquitination assay
Cells were exposed to 15 mmol/L MG132 for 8 h and lysates immunoprecipitated with anti-HIF1α, anti-β-catenin, anti-STAT3 or anti-TAB3 antibodies. Ubiquitination of HIF1α, β-catenin, STAT3 and TAB3 was detected by anti-Ub antibody.
Animal studies
A subcutaneous tumorigenesis model of human HCC cells was generated in male athymic nude mice (BALB/c-nu/nu, 6-8 weeks old, Shanghai SLAC Laboratory Animal Co., Ltd.) by subcutaneous injection of 1 × 10 7 cells in 100 μL PBS into the flank. Tumor volume was measured twice a week and calculated as follows: 1/2 × (largest diameter) × (smallest diameter) 2 . The mice were randomly divided into two groups: a negative control group (immunoglobulin G [IgG], 10 mg/kg intraperitoneal injection, twice weekly, n = 10) and a BV group (10 mg/kg intraperitoneal injection, twice weekly, n = 10), or a BV group (n = 10) and a BV plus aspirin (oral administration, 5 mg/L) group (n = 10), depending on the experiment. Treatment was initiated when the tumor size reached approximately 100 mm 3 . For the survival experiments, treatment was continued until the mice died. At the end of the experiment, the mice were sacrificed, and the tumor weight and volume were recorded. Tumor specimens were stored in liquid nitrogen or fixed in formalin for further analysis. Data are presented as mean ± standard deviation (SD).
An orthotopic transplanted liver cancer model was created by subcutaneous injection of FAT10 +/+ HCCLM3, FAT10 −/− HCCLM3, FAT10 +/+ SMMC-7721 and FAT10 −/− SMMC-7721 cells stably expressing the firefly luciferase gene into the flanks of male BALB/c-nu/nu mice, 6-8 weeks old. When the subcutaneous tumors grew to about 1 cm in diameter, they were removed and cut into pieces with a volume of approximately 1 mm 3 under aseptic conditions, and then the pieces were implanted into the livers of nude mice. Mice were anesthetized with isoflurane 10 days later, and bioluminescent imaging was performed in a Lumina Series III IVIS (In Vivo Imaging System) instrument (PerkinElmer, MA, USA) and B-ultrasound measurements were made in a Vevo 2100 Imaging System (FUJIFILM VisualSonics, Toronto, Canada). Tumor volume was calculated as: 1/2 × (anteroposterior diameter) × (transverse diameter) × (axial diameter). Mice were treated with intraperitoneal injection of 10 mg/kg BV twice weekly for ten days and tumor size was reassessed by bioluminescent imaging and B-ultrasound. Data are presented as mean ± SD. At the end of the experiment, mice were sacrificed and tumor specimens stored in liquid nitrogen or fixed in formalin for further analysis.
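For clarity, the two tumor-volume formulas quoted above can be written as a short sketch; the caliper and ultrasound readings are hypothetical example values, not data from this study:

```python
# Minimal sketch of the tumor-volume estimates used above (lengths in mm).
def volume_subcutaneous(largest: float, smallest: float) -> float:
    """Caliper model: V = 1/2 x largest x smallest^2."""
    return 0.5 * largest * smallest ** 2

def volume_orthotopic(ap: float, transverse: float, axial: float) -> float:
    """B-ultrasound model: V = 1/2 x anteroposterior x transverse x axial."""
    return 0.5 * ap * transverse * axial

print(volume_subcutaneous(8.0, 5.0))       # 100.0 mm^3 (treatment threshold)
print(volume_orthotopic(6.0, 5.0, 7.0))    # 105.0 mm^3
```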
Immunohistochemistry (IHC)
Sections of xenografted tumor tissues were treated with xylene and graded alcohol, and then subjected to antigen retrieval in 0.01 M citrate buffer. Hydrogen peroxide was used for blocking.
The sections were incubated with goat serum for 30 min and then with primary antibody overnight at 4 °C. A 2-step immunohistochemical method (catalog no.: PV-9000; ZSGB-BIO Co., Ltd., Beijing, China) was adopted for immunostaining. The staining intensity and percentage of positive cells were scored semi-quantitatively by 3 pathologists who were blinded to the clinical parameters.
RNA-seq
Total RNA was isolated using Trizol reagent (Invitrogen Life Technologies) and three micrograms were used for sample preparation. In brief, mRNA was purified from total RNA using poly-T oligo-attached magnetic beads, first-strand cDNA was synthesized using random oligonucleotides and SuperScript II, and second-strand cDNA with DNA Polymerase I and RNase H. Remaining overhangs were converted into blunt ends via exonuclease/polymerase activities, enzymes were removed, 3′ ends were adenylated and Illumina PE adapter oligonucleotides were ligated for hybridization. To select cDNA fragments of the preferred 200 bp length, the library fragments were purified using the AMPure XP system (Beckman Coulter, Beverly, CA, USA). DNA fragments with ligated adaptor molecules on both ends were selectively enriched using the Illumina PCR Primer Cocktail in a 15-cycle PCR. Products were purified (AMPure XP system) and quantified using the Agilent high-sensitivity DNA assay on a Bioanalyzer 2100 system (Agilent). The sequencing library was then sequenced on a HiSeq platform (Illumina).
Micro-CT angiography
Balb/c nude mice received a 50 ml intraperitoneal injection of heparin 10 min before being sacrificed by inhalation of carbon dioxide. Contrast medium was injected through the ascending aorta. The tumors were then imaged with a micro-CT scanner system (PINGSENG, NEMO Micro CT). The tumors were imaged as the background media. The micro-CT images were generated by operating the X-ray tube at an energy level of 60 kV and a current of 0.2 mA. The vascular network and tumor were extracted by a series of image processing steps. An intensity threshold of 1,500 Hounsfield units and morphological filtering (erosion and dilation) were applied to the volumetric micro-CT image data to extract the vascular volume (VV). The tumor volume (TV) was extracted from the background in a similar fashion, with an intensity threshold of 58 Hounsfield units. Vessel density (VV/TV) was determined from the ratio of VV to TV. The vascular and tumor intensity thresholds were determined by visual inspection of the segmentation results from a subset of samples.
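The segmentation pipeline described above (intensity thresholding plus morphological erosion/dilation, followed by the VV/TV ratio) can be sketched as follows. The `ct_hu` array is a hypothetical Hounsfield-unit volume standing in for the real micro-CT data, and the single-pass erosion/dilation is an assumption; only the thresholds come from the text.

```python
# Hedged sketch of the vessel-density computation: threshold, morphological
# filtering, then vessel density = VV / TV (both as voxel counts).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
ct_hu = rng.uniform(-200, 2000, size=(64, 64, 64))   # placeholder CT volume

vessel_mask = ct_hu > 1500                 # contrast-filled vasculature
vessel_mask = ndimage.binary_dilation(ndimage.binary_erosion(vessel_mask))

tumor_mask = ct_hu > 58                    # tumor vs. background

vv = vessel_mask.sum()                     # vascular volume (voxels)
tv = tumor_mask.sum()                      # tumor volume (voxels)
print(f"vessel density VV/TV = {vv / tv:.3f}")
```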
Ethics statement
All experiments involving animals were conducted according to the ethical policies and procedures approved by the Ethics Committee of the Nanchang University (Approval no. NCUFll-20160522).
Statistical analysis
Statistical analysis was performed using GraphPad Prism software. Differences between two groups were analyzed by Student's t test and among more than two groups by one-way analysis of variance (ANOVA). A value of p < 0.05 was considered statistically significant.
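A minimal sketch of these two tests using SciPy rather than GraphPad Prism (the group values are hypothetical placeholders, not measurements from this study):

```python
# Hedged sketch of the statistical comparisons described above.
from scipy import stats

ctrl  = [310, 295, 330, 301, 322]   # e.g. IgG-treated tumor volumes (mm^3)
bv    = [150, 172, 140, 165, 158]   # e.g. BV-treated
combo = [ 90, 101,  85,  97,  88]   # e.g. BV + aspirin

t_stat, p_two = stats.ttest_ind(ctrl, bv)          # two groups: Student's t test
f_stat, p_anova = stats.f_oneway(ctrl, bv, combo)  # >2 groups: one-way ANOVA

print(f"t test p = {p_two:.3g}; ANOVA p = {p_anova:.3g} (alpha = 0.05)")
```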
Other materials and methods are provided in the supplementary materials and methods.
Simultaneous upregulation of multiple FAT10-dependent non-VEGF factors in HCC cells limits the antitumor effects of anti-VEGF therapy
To investigate whether FAT10 expression in HCC cells is involved in limiting the efficacy of anti-VEGF therapy, we used clustered regularly interspaced short palindromic repeats (CRISPR)-CRISPR-associated protein 9 technology to knock out FAT10 in HCCLM3 and SMMC-7721 cells, generating FAT10 −/− HCCLM3 and FAT10 −/− SMMC-7721 cells. Subcutaneous tumors were seeded in nude mice from FAT10 +/+ and FAT10 −/− HCC cells and treated with BV, a monoclonal antibody that neutralizes VEGF, or with IgG as a control. Neither the growth rate nor the tumor weight differed significantly between BV-treated and untreated mice inoculated with FAT10 +/+ HCC cells (Fig. 1A-B and supplementary Fig. 1A-B). Furthermore, life-spans of FAT10 +/+ HCC cell-inoculated mice did not differ, regardless of the treatment type (Fig. 1C and supplementary Fig. 1C). However, mice inoculated with FAT10 −/− HCC cells and treated with BV showed a slower growth rate and lower tumor weight than control mice (Fig. 1D-E and supplementary Fig. 1D-E). In addition, the lifespan of mice treated with BV was significantly longer compared with that of control IgG-treated mice in the FAT10 −/− HCC cell models (Fig. 1F and supplementary Fig. 1F). Thus, anti-VEGF therapy exerted significant antitumor effects in the FAT10 −/− HCC cell models but not in the FAT10 +/+ HCC cell models.
Next, RNA-sequencing (RNA-seq) analysis using HCCLM3-derived tumor tissues showed that FAT10 activation was abnormally high, along with enhanced activation of hypoxia-, inflammation-, and multiple proangiogenesis-related genes, such as genes encoding VEGF, bFGF, TGF-β, ANG2, and PDGF, in tumor tissues from BV-treated mice compared with those from untreated mice; however, RNA-seq showed that hypoxia- and inflammation-related genes, as well as the gene encoding VEGF, were also induced in FAT10 −/− HCCLM3 tumor tissues of mice treated with BV compared with those from untreated mice, whereas the expression levels of other proangiogenic factors, including bFGF, TGF-β, ANG2, and PDGF, did not differ significantly between the BV-treated and BV-untreated groups (Fig. 1G). Quantitative reverse transcription-polymerase chain reaction (qRT-PCR) analysis showed higher mRNA levels of FAT10, VEGF, bFGF, TGF-β, ANG2, and PDGF in FAT10 +/+ HCC tumors from BV-treated mice compared with untreated mice (Fig. 1H and supplementary Fig. 1G). Although BV treatment significantly increased the mRNA level of VEGF in FAT10 −/− HCC cells, no significant changes were observed in the mRNA levels of bFGF, TGF-β, ANG2, and PDGF in FAT10 −/− HCC cells, regardless of the treatment type (Fig. 1I and supplementary Fig. 1H). These results suggested that BV treatment-aggravated hypoxia and inflammation in tumor tissues can mediate the simultaneous upregulation of multiple FAT10-dependent non-VEGF proangiogenic factors, including bFGF, TGF-β, ANG2, and PDGF, in HCC cells.
Furthermore, we investigated the relationships between the expression of FAT10 and that of VEGF, bFGF, TGF-β, ANG2, and PDGF induced by inflammation or hypoxia in HCC cells. qRT-PCR analysis showed increased FAT10, VEGF, bFGF, TGF-β, ANG2, and PDGF expression when FAT10 +/+ HCC cells were exposed to tumor necrosis factor (TNF)-α and interferon (IFN)-γ or hypoxia (Fig. 1J-K and supplementary Fig. 1I-J). However, VEGF, bFGF, TGF-β, ANG2, and PDGF mRNA did not change with TNF-α/IFN-γ treatment of FAT10 −/− HCC cells (Fig. 1L and supplementary Fig. 1K). Unexpectedly, hypoxia increased VEGF mRNA but not that of bFGF, TGF-β, ANG2, and PDGF in FAT10 −/− HCC cells (Fig. 1M and supplementary Fig. 1L). Thus, these results indicated that the simultaneous upregulation of multiple non-VEGF proangiogenic factors in HCC cells, including bFGF, TGF-β, ANG2, and PDGF, depends on FAT10, regardless of induction by pro-inflammatory cytokines or hypoxia. In summary, our results suggested that after BV treatment, the aggravation of hypoxia and inflammation in tumor tissues could mediate the simultaneous upregulation of multiple FAT10-dependent non-VEGF proangiogenic factors, including bFGF, TGF-β, ANG2, and PDGF, in HCC cells, leading to the limited antitumor effects of anti-VEGF therapy.
Simultaneous upregulation of multiple FAT10-dependent non-VEGF signals secreted by HCC cells can promote VEGF-independent angiogenesis
Next, we sought to explore whether the simultaneous upregulation of multiple FAT10-dependent non-VEGF proangiogenic signals, including bFGF, TGF-β, ANG2, and PDGF, which are secreted by HCC cells, can promote VEGF-independent angiogenesis. First, our in vitro data showed that when HCC cell-secreted VEGF was neutralized, FAT10 overexpression upregulated multiple non-VEGF proangiogenic signals, including bFGF, TGF-β, ANG2, and PDGF, which are secreted by HCC cells to enhance the proliferation and tube formation ability of human umbilical vein endothelial cells (HUVECs). HCCLM3 and SMMC-7721 cell lines stably overexpressing FAT10 (Flag-FAT10) were created (supplementary Fig. 2A-B). Subsequently, anti-VEGF antibodies were added to the culture medium of Flag-FAT10 HCC and control cells, respectively. Higher expression of FAT10, VEGF, bFGF, TGF-β, ANG2 and PDGF mRNA was shown by qRT-PCR in Flag-FAT10 cells than in controls (Fig. 2A and supplementary Fig. 2C). The enzyme-linked immunosorbent assay (ELISA) results revealed that the protein concentrations of bFGF, TGF-β, ANG2, and PDGF were significantly higher in Flag-FAT10 HCC cells than in control cells and that the protein concentration of VEGF in the supernatants of the two groups did not differ (Fig. 2B and supplementary Fig. 2D). Flag-FAT10 HCC cell conditioned medium was used to culture HUVECs, and enhanced tube formation and proliferation relative to controls were shown by immunofluorescence and real-time cell analysis (Fig. 2C-E and supplementary Fig. 2E-G).
Subsequently, in vivo data showed that the simultaneous upregulation of FAT10-mediated non-VEGF proangiogenic signals, bFGF, TGF-β, ANG2 and PDGF, in HCC cells promoted angiogenesis when VEGF secretion was blocked by BV. FAT10 +/+ HCC cells were used to establish orthotopic liver xenograft models in nude mice treated with BV (Fig. 2F). Evaluation of tumors by an in vivo fluorescence imaging system (IVIS) and B-ultrasound (B-US) after 10 days of BV treatment revealed greater tumor volume in treated mice (Fig. 2G-H and supplementary Fig. 2H-I). Levels of FAT10, VEGF, bFGF, TGF-β, ANG2 and PDGF mRNA significantly increased after BV treatment in FAT10 +/+ HCC cells (Fig. 2I and supplementary Fig. 2J). VEGF protein concentration decreased in FAT10 +/+ tumor tissues whereas those of bFGF, TGF-β, ANG2 and PDGF increased after BV treatment (Fig. 2J and supplementary Fig. 2K). Immunohistochemical (IHC) analysis indicated increases in hypoxia, macrophage infiltration and vascular density in FAT10 +/+ HCC-derived tumor tissues, and FAT10 expression increased in HCC cells after BV treatment (Fig. 2K and supplementary Fig. 2L).
Furthermore, in vivo results confirmed that the simultaneous upregulation of non-VEGF proangiogenic signals, bFGF, TGF-β, ANG2 and PDGF, secreted by HCC cells to promote VEGF-independent angiogenesis was dependent on FAT10. FAT10 −/− HCC cells were used to establish orthotopic liver xenograft models in nude mice, and BV was used to neutralize VEGF secreted from FAT10 −/− HCC cells in response to hypoxia in tumor tissues (Fig. 2F). IVIS and B-US evaluation showed no increase in FAT10 −/− HCC tumor growth after 10 days of BV treatment (Fig. 2L-M and supplementary Fig. 2M-N). VEGF mRNA increased in FAT10 −/− HCC cells and VEGF protein decreased in FAT10 −/− HCC cell-derived tumor tissues after BV treatment (Fig. 2N and supplementary Fig. 2O). However, levels of mRNA and protein for bFGF, TGF-β, ANG2 and PDGF were not altered by BV treatment (Fig. 2O and supplementary Fig. 2P). IHC analysis revealed that although hypoxia and macrophage infiltration were higher after BV treatment compared with before treatment, the vascular density did not differ significantly in FAT10 −/− tumor tissues before and after BV treatment (Fig. 2P and supplementary Fig. 2Q). Taken together, our data confirmed that the simultaneous upregulation of FAT10-dependent non-VEGF proangiogenic signals, bFGF, TGF-β, ANG2 and PDGF, secreted by HCC cells promoted VEGF-independent angiogenesis in tumor tissues.
Suppressing BV-induced VEGF-independent angiogenesis enhances its efficacy by inhibiting multiple FAT10-mediated non-VEGF signals in HCC cells
Next, we wanted to explore whether the simultaneous upregulation of FAT10-mediated non-VEGF proangiogenic signals could compensate for the function of BV-blocked VEGF signaling in HCC cells. To this end, we investigated the relationships between the protein concentrations of VEGF, bFGF, TGF-β, ANG2, and PDGF in tumor tissues and the status of tumor angiogenesis in BV-treated and BV-untreated mice inoculated with FAT10 +/+ HCC cells. Lower VEGF protein and higher bFGF, TGF-β, ANG2 and PDGF protein were found in BV-treated FAT10 +/+ HCC tumor models compared with untreated groups (Fig. 3A). Furthermore, vascular imaging with microcomputed tomography (micro-CT) revealed that tumor neovascularization did not differ significantly in the BV-treated FAT10 +/+ HCCLM3 tumor model compared with untreated groups in the fourth week (Fig. 3B). These results indicated that BV treatment induced the simultaneous upregulation of multiple FAT10-mediated non-VEGF proangiogenic signals, including bFGF, TGF-β, ANG2, and PDGF, which can replace the BV-blocked VEGF signal in HCC cells. This effect resulted in the enhancement of VEGF-independent angiogenesis accelerated by these non-VEGF proangiogenic signals in HCC cells to compensate for the inhibition of VEGF-induced angiogenesis in BV-treated mice inoculated with FAT10 +/+ HCC cells. However, the same compensatory phenomenon was not seen in BV-treated mice inoculated with FAT10 −/− HCC cells. ELISA showed that compared with that in the BV-untreated groups, the protein concentration of VEGF was significantly lower in the BV-treated FAT10 −/− HCC cell models, although no significant changes were observed in the protein concentrations of bFGF, TGF-β, ANG2, and PDGF, regardless of the treatment type, in mice inoculated with FAT10 −/− HCC cells (Fig. 3C). Significantly reduced tumor angiogenesis in the BV-treated FAT10 −/− HCCLM3 tumor model relative to untreated controls was observed by micro-CT in the sixth week (Fig. 3D). Therefore, it is suggested that the antitumor effects of BV were limited in the FAT10 +/+ HCC cell model because BV aggravated hypoxia and inflammation and promoted FAT10 expression in HCC cells. Increased FAT10 expression stimulated the proangiogenic VEGF, bFGF, TGF-β, ANG2 and PDGF in HCC cells. BV blocked the secreted VEGF signal, but the upregulation of FAT10-mediated non-VEGF proangiogenic signals replaced the BV-blocked VEGF signal, and the enhanced VEGF-independent angiogenesis compensated for the loss of VEGF-induced angiogenesis, limiting the effectiveness of BV treatment. In contrast, the simultaneous upregulation of bFGF, TGF-β, ANG2, and PDGF in response to inflammation and hypoxia was FAT10-dependent. Additional VEGF secreted by FAT10 −/− HCC cells in response to hypoxia was neutralized by BV, resulting in significant antitumor effects of BV in the FAT10 −/− HCC cell models.
Furthermore, we sought to investigate whether the inhibition of BV treatment-induced simultaneous upregulation of multiple FAT10-mediated non-VEGF proangiogenic signals, including bFGF, TGF-β, ANG2, and PDGF, which are secreted by HCC cells, could suppress BV treatment-induced VEGF-independent angiogenesis to enhance BV efficacy. HCCLM3 cells were subcutaneously injected into nude mice. When the tumor volume was approximately 100 mm 3 , the mice were treated with either BV alone or in combination with aspirin, as the latter strategy inhibits local tissue inflammation [27]. Compared with treatment with BV alone, co-treatment with BV and aspirin significantly reduced tumor growth in the HCCLM3 cell model (Fig. 3E). ELISA of the tumor tissues revealed that in the HCCLM3 cell model, the protein concentrations of TNF-α, IFN-γ, and interleukin (IL)-6 were significantly lower in the co-treated group than in the group treated with BV alone (Fig. 3F). qRT-PCR analysis showed that the mRNA levels of FAT10, VEGF, bFGF, TGF-β, ANG2, and PDGF in HCCLM3 cells were significantly lower following co-treatment than following treatment with BV alone (Fig. 3G). ELISA of the tumor tissues showed no significant changes in the VEGF protein concentration following either treatment; however, the protein concentrations of bFGF, TGF-β, ANG2, and PDGF were significantly lower in the co-treated HCCLM3 model compared with the BV-treated model (Fig. 3H). No significant difference in tumor hypoxia was found by IHC analysis of the two treatment groups but macrophage infiltration, FAT10 expression and vascular density were lower in the co-treated HCCLM3 tumor model in the fifth week (Fig. 3I).
Micro-CT analysis revealed less neovascularization after co-treatment in the fifth week (Fig. 3J). Similar results were observed in the SMMC-7721 cell model (supplementary Fig. 3A-E). These results indicated that co-treatment with aspirin suppressed BV-induced inflammation and downregulated FAT10-mediated non-VEGF proangiogenic signals, resulting in the inhibition of VEGF-independent angiogenesis promoted by BV treatment, enhancing BV efficacy in inhibiting angiogenesis.
FAT10 overexpression enhances the activation of different signaling pathways by upregulating multiple proteins in HCC cells simultaneously
Next, we sought to elucidate why BV treatment could increase FAT10 expression to upregulate the expression of VEGF, bFGF, TGF-β, ANG2, and PDGF in HCC cells. The activation of multiple angiogenesis-related signaling pathways, such as the hypoxia-inducible factor 1α (HIF1α), β-catenin, signal transducer and activator of transcription 3 (STAT3), and NF-κB pathways, plays an important role in regulating the expression of VEGF, bFGF, TGF-β, ANG2, and PDGF in tumor cells [28][29][30][31][32][33]. Thus, we further investigated the relationships between the expression of FAT10 and the activation of these pathways in HCC cells. RNA-seq analysis of HCCLM3 cell orthotopic liver transplant models showed that the activation of hypoxia-related genes, inflammation-related genes, and FAT10, VEGF, bFGF, TGF-β, ANG2, and PDGF expression was enhanced, and that the downstream target genes of the HIF1α, β-catenin, STAT3, and NF-κB signaling pathways were upregulated, in the post-BV treatment group compared with the pre-BV treatment group (Fig. 4A). IHC analysis showed that FAT10 expression increased, along with simultaneous increases in the nuclear transcription of HIF1α, β-catenin, STAT3, and p65, in HCC cells in the post-BV treatment groups compared with the pre-BV treatment groups (Fig. 4B and supplementary Fig. 4A). In contrast, IHC analysis showed that FAT10 expression decreased, along with simultaneous inhibition of the nuclear transcription of HIF1α, β-catenin, STAT3, and p65, in HCC cells in the co-treatment (BV and aspirin) groups compared with the BV treatment groups (Fig. 4C and supplementary Fig. 4B). Thus, these results indicated that FAT10 overexpression simultaneously enhanced the activation of the HIF1α, β-catenin, STAT3, and NF-κB signaling pathways in HCC cells in tumor tissues.
In addition, β-catenin, STAT3, HIF1α and TAB3 proteins influence the transcriptional activity of the β-catenin, STAT3, HIF1α and NF-κB signaling pathways in tumor cells [19,31,34,35]. Thus, we sought to further explore whether FAT10 overexpression can simultaneously enhance the transcriptional activity of the β-catenin, STAT3, HIF1α, and NF-κB signaling pathways by simultaneously increasing the protein levels of β-catenin, STAT3, HIF1α, and TAB3 in HCC cells. Western blot analysis of orthotopic liver xenograft tumor tissues revealed increased FAT10 expression, along with upregulation of β-catenin, STAT3, HIF1α, and TAB3 protein expression, in the post-BV treatment groups compared with the pre-BV treatment groups (Fig. 4D and supplementary Fig. 4C). In contrast, western blot analysis of the subcutaneous tumorigenesis model revealed decreased FAT10 expression, along with downregulation of β-catenin, STAT3, HIF1α, and TAB3 protein expression, in the co-treatment (BV and aspirin) groups compared with BV alone (Fig. 4E and supplementary Fig. 4D). Furthermore, western blot analysis showed that compared with that in control cells, the protein expression of FAT10, β-catenin, STAT3, HIF1α, and TAB3 was lower in shFAT10 HCC cells and higher in Flag-FAT10 HCC cells (Fig. 4F and supplementary Fig. 4E). In addition, FAT10 overexpression was shown to increase β-catenin, STAT3, HIF1α and TAB3 protein and enhance the transcriptional activity of the respective signaling pathways by western blotting analysis and luciferase reporter assay in HCC cells (Fig. 4G and supplementary Fig. 4F). Further studies showed that downregulation of FAT10 inhibited the transcriptional activity of the β-catenin, STAT3, HIF1α, and NF-κB signaling pathways; in contrast, FAT10 upregulation attenuated this inhibition in HCC cells (Fig. 4H and supplementary Fig. 4G). Taken together, our data confirmed that FAT10 overexpression simultaneously enhanced the activation of the β-catenin, STAT3, HIF1α, and NF-κB signaling pathways by simultaneously increasing the protein levels of β-catenin, STAT3, HIF1α, and TAB3 in HCC cells.
FAT10 simultaneously stabilizes multiple substrates by antagonizing their ubiquitination in HCC cells
Subsequently, we explored the mechanisms by which FAT10 overexpression can simultaneously increase the protein levels of β-catenin, HIF1α, STAT3, and TAB3 in HCC cells. Previous studies have indicated that these proteins are degraded by the Ub-proteasome system in cells [19,34,36,37]. In addition, our studies have confirmed that FAT10 stabilizes substrates by antagonizing their ubiquitination; thus, FAT10 overexpression can increase the protein levels of substrates in various cells [17,19,24,25]. Based on these observations, we hypothesized that FAT10 may simultaneously stabilize multiple substrates by antagonizing their ubiquitination in HCC cells, with the result that overexpression of FAT10 can simultaneously increase the protein levels of β-catenin, HIF1α, STAT3, and TAB3 in HCC cells. Interestingly, our data confirmed that FAT10 can simultaneously stabilize the β-catenin, HIF1α, STAT3, and TAB3 proteins by antagonizing their ubiquitination in HCC cells. This conclusion was based on the following observations. First, the co-immunoprecipitation (co-IP) and confocal microscopy data revealed that FAT10 bound to β-catenin, HIF1α, STAT3, and TAB3 in HCC cells (Fig. 5A-B and supplementary Fig. 5A-B). Second, FAT10 affected the proteasomal degradation of these proteins in HEK293T cells. Western blot analysis showed that reducing or increasing FAT10 expression altered the protein levels of β-catenin, HIF1α, STAT3, and TAB3, whereas these effects were abolished after HEK293T cells were treated with MG132 (Fig. 5C). Third, FAT10 competed with Ub to bind to these substrates. The glutathione S-transferase (GST) pulldown assay showed that as FAT10 expression increased, the levels of the FAT10-β-catenin, FAT10-HIF1α, FAT10-STAT3, and FAT10-TAB3 complexes gradually increased, whereas the levels of the Ub-β-catenin, Ub-HIF1α, Ub-STAT3, and Ub-TAB3 complexes gradually decreased (Fig. 5D). Furthermore, western blot analysis showed that the levels of the FAT10-β-catenin, FAT10-HIF1α, FAT10-STAT3, and FAT10-TAB3 complexes were lower and the levels of the Ub-β-catenin, Ub-HIF1α, Ub-STAT3, and Ub-TAB3 complexes were higher in shFAT10 cells compared with the corresponding control cells; however, these changes were reversed in Flag-FAT10 HCC cells compared with the corresponding control cells (Fig. 5E and supplementary Fig. 5C). Finally, the in vivo ubiquitination assay results showed that in HCC cells, reduced FAT10 expression increased but FAT10 overexpression decreased the ubiquitination levels of β-catenin, HIF1α, STAT3, and TAB3 compared with those in the corresponding control cells (Fig. 5F and supplementary Fig. 5D). Overall, our data confirmed that FAT10 can simultaneously stabilize multiple substrates by antagonizing their ubiquitination in HCC cells.
Discussion
As one of the earliest developed and first Food and Drug Administration (FDA)-approved anti-VEGF agents for the treatment of human cancers, the efficacy of BV in inhibiting angiogenesis is limited [38][39][40]. Thus, the identification of key factors limiting the efficacy of BV and exploring the related underlying molecular mechanisms are valuable and expected to provide new mechanistic insights to facilitate the design of pharmacological strategies for antiangiogenic agents. Previous studies on the limited efficacy of anti-VEGF therapy have mainly focused on cases in which the expression of a single non-VEGF proangiogenic factor increases after VEGF blockade, thereby limiting efficacy, and on exploring the corresponding mechanisms [11][12][13][14]. However, several issues remain unclear. First, whether there is a key factor that simultaneously upregulates the expression of multiple non-VEGF factors when VEGF is blocked, which in turn limits the efficacy of anti-VEGF therapy, remains unclear. Second, if this key factor is present, what is the mechanism by which it leads to the simultaneous upregulation of the expression of multiple non-VEGF factors? Third, whether multiple non-VEGF factors that are simultaneously upregulated when VEGF is blocked can replace the effects of the blocked VEGF and promote angiogenesis has not yet been reported. Here, our results showed for the first time that FAT10 is a key factor limiting the efficacy of BV by controlling the simultaneous upregulation of multiple non-VEGF factors, including bFGF, TGF-β, ANG2, and PDGF, in HCC cells. This conclusion is based on the following observations. First, in vivo results in subcutaneous tumor models in nude mice established with FAT10 +/+ and FAT10 −/− HCC cells showed that BV exerted significant antitumor effects in the FAT10 −/− HCC cell models but not in the FAT10 +/+ HCC cell models. Second, BV treatment-aggravated hypoxia and inflammation in tumor tissues mediated the simultaneous upregulation of FAT10-dependent bFGF, TGF-β, ANG2, and PDGF in HCC cells. Further studies have demonstrated that the simultaneous upregulation of bFGF, TGF-β, ANG2, and PDGF in HCC cells induced by either pro-inflammatory cytokines or hypoxia depends on FAT10. Third, when the HCC cell-secreted VEGF signal was blocked by BV, the simultaneous upregulation of multiple FAT10-dependent non-VEGF proangiogenic signals, including bFGF, TGF-β, ANG2, and PDGF, which are secreted by HCC cells, promoted VEGF-independent angiogenesis. Furthermore, our data showed that after BV treatment, the BV treatment-induced simultaneous upregulation of FAT10-mediated bFGF, TGF-β, ANG2, and PDGF could replace the function of the BV-blocked VEGF signal in HCC cells. This effect resulted in the enhancement of VEGF-independent angiogenesis, which was accelerated by non-VEGF proangiogenic signals to compensate for the inhibition of VEGF-induced angiogenesis in BV-treated mice inoculated with HCC cells. Finally, in vivo data showed that inhibiting the BV treatment-induced simultaneous upregulation of FAT10-mediated bFGF, TGF-β, ANG2, and PDGF in HCC cells led to the inhibition of BV-induced VEGF-independent angiogenesis, thereby significantly enhancing the efficacy of BV in inhibiting angiogenesis.
Increasing the protein levels of HIF1α, β-catenin, STAT3, and TAB3 can enhance the transcriptional activity of the HIF1α, β-catenin, STAT3, and NF-κB signaling pathways in tumor cells [19,31,34,35]. Enhanced activation of these pathways can increase the levels of VEGF, bFGF, TGF-β, ANG, and PDGF in tumor cells [28][29][30][31][32][33]. In this study, our results showed that after BV treatment, the increase in FAT10 expression enhanced the activation of the HIF1α, β-catenin, STAT3, and NF-κB signaling pathways in HCC cells, along with simultaneously upregulated protein levels of HIF1α, β-catenin, STAT3, and TAB3 in tumor tissues. In addition, FAT10 overexpression simultaneously increased the protein levels of HIF1α, β-catenin, STAT3, and TAB3 in HCC cells. Furthermore, our results confirmed that FAT10 could affect the transcriptional activities of HIF1α, β-catenin, STAT3, and NF-κB by regulating their protein levels in HCC cells. Our previous studies have confirmed that FAT10 stabilizes its substrates by antagonizing ubiquitination [17,19,24,25]. However, it was unclear whether FAT10 has the function of simultaneously stabilizing multiple substrates by antagonizing their ubiquitination. Interestingly, our results confirmed for the first time that FAT10 can simultaneously stabilize the HIF1α, β-catenin, STAT3, and TAB3 proteins by antagonizing their ubiquitination in HCC cells. Therefore, by combining in vivo and in vitro results, we established a mechanistic model describing how FAT10 in HCC cells limits the efficacy of anti-VEGF therapy by accelerating VEGF-independent angiogenesis. Following BV treatment, aggravated hypoxia and inflammation in tumor tissues increased FAT10 expression in HCC cells. FAT10 exerted its function of simultaneously stabilizing multiple substrates, so that the increased FAT10 simultaneously upregulated the protein levels of multiple substrates, including HIF1α, β-catenin, STAT3 and TAB3, in HCC cells. This effect simultaneously enhanced the transcriptional activity of the HIF1α, β-catenin, STAT3, and NF-κB signaling pathways, thus simultaneously increasing the levels of VEGF, bFGF, TGF-β, ANG2, and PDGF in HCC cells. Although BV neutralized HCC cell-secreted VEGF protein, it induced the simultaneous upregulation of multiple FAT10-mediated non-VEGF proangiogenic signals secreted by HCC cells, including bFGF, TGF-β, ANG2, and PDGF, which compensated for
Fig. 2.
Fig. 2. Simultaneous upregulation of multiple FAT10-dependent non-VEGF signals secreted by HCC cells can promote VEGF-independent angiogenesis. A mRNA expression in Flag-FAT10 HCCLM3 and control cells incubated with anti-VEGF antibody for 24 h. *p < 0.05; **p < 0.01; ***p < 0.001. B Concentrations of proangiogenic factors determined by ELISA. ***p < 0.001; NS: not significant. C Tube formation by HUVECs. Scale bars: 50 μm. D Statistical histogram of the relative total tube length and number of branches. ***p < 0.001. E Real-time cellular analysis of HUVEC proliferation. ***p < 0.001. F Schematic diagram of orthotopic liver implantation of cancer in the nude mouse model before or after 10 days of BV treatment. G The orthotopic implantation of HCCLM3 cells into the livers of nude mice was evaluated by in vivo fluorescence before and after BV treatment. Data are shown as mean ± standard deviation (n = 6). ***p < 0.001. H The volumes of orthotopic liver implants in the nude mouse model were measured by B-ultrasound. ***p < 0.001. I mRNA expression levels of FAT10 and proangiogenic factors. ***p < 0.001. J Concentrations of proangiogenic factors determined by ELISA. ***p < 0.001. K IHC images of hypoxyprobe (HP), macrophage marker (F4/80), endothelial cell marker (CD31) and FAT10. Scale bars, 200 μm and 50 μm. ***p < 0.001. L The orthotopic implantation of FAT10 −/− HCCLM3 cells into the livers of nude mice was evaluated by in vivo fluorescence before and after BV treatment. Data are shown as mean ± standard deviation (n = 6). NS, not significant. M The volumes of orthotopic liver implants in the nude mouse model were measured by B-ultrasound. NS, not significant. N VEGF mRNA and protein by qRT-PCR and ELISA. ***p < 0.001. O bFGF, TGF-β, ANG2 and PDGF mRNA and protein by qRT-PCR and ELISA. NS: not significant. P IHC images of HP, F4/80, CD31 and FAT10. Scale bars, 200 μm and 50 μm. ***p < 0.001; NS, not significant.
Fig. 4.
Fig. 4. Overexpression of FAT10 enhances the activation of different signaling pathways by upregulating multiple proteins in HCC cells simultaneously. A RNA-seq analysis of the HCCLM3 cell orthotopic xenograft model in nude mice pre- and post-BV treatment. Self-organizing heat map of hypoxia-, inflammation- and angiogenesis-related genes and the downstream target genes of the HIF1α, β-catenin, STAT3 and NF-κB signaling pathways. B IHC images of FAT10, HIF1α, β-catenin, STAT3 and p65 in HCCLM3 tumor tissues from mice pre- and post-BV treatment. Scale bars, 200 μm and 50 μm. **p < 0.01; ***p < 0.001. C IHC images of FAT10, HIF1α, β-catenin, STAT3 and p65 in HCCLM3 tumor tissues from mice treated with either BV alone or in combination with aspirin. Scale bars, 200 μm and 50 μm. **p < 0.01; ***p < 0.001. D The protein expression of the investigated genes in HCCLM3 tumor tissues from mice pre- and post-BV treatment was measured by western blotting. E The protein expression of the investigated genes in HCCLM3 tumor tissues from mice treated with either BV alone or in combination with aspirin was measured by western blotting. F Proteins in HCCLM3 cells with different levels of FAT10 expression. G The protein expression of the investigated genes and relative luciferase activity levels were analysed in HCCLM3 cells transfected with the Flag-FAT10 plasmid. **p < 0.01; ***p < 0.001. H The protein expression of the investigated genes and the relative luciferase activity driven by the promoters of the investigated genes in HCCLM3 cells were determined. (a, shNC group; b, shFAT10 group; c, shFAT10 + HA-HIF-1α group or shFAT10 + HA-β-catenin group or shFAT10 + HA-STAT3 group or shFAT10 + HA-TAB3 group). **p < 0.01; ***p < 0.001; NS, not significant.
Fig. 5.
Fig. 5. FAT10 simultaneously stabilizes multiple substrates by antagonizing their ubiquitination in HCC cells. A Co-IP revealed the direct interaction between endogenous FAT10 and the HIF1α, β-catenin, STAT3 and TAB3 proteins in HCCLM3 cells. B Confocal microscopy to show subcellular co-localization of FAT10 (red) and HIF1α, β-catenin, STAT3 and TAB3 (green) in HCCLM3 cells with DAPI nuclear staining (blue). Scale bars: 10 μm. C Western blot analysis of FAT10 and the HIF1α, β-catenin, STAT3 and TAB3 proteins with different FAT10 expression in HEK293T cells after treatment with 10 μM MG132 or vehicle control for 24 h. D Binding of HIF1α, β-catenin, STAT3 and TAB3 during the course of the competition was analyzed by GST pulldown experiments. E HCCLM3 cells were transfected with the indicated plasmid, and Co-IP analysis was performed to detect alteration of the complexes. F Ubiquitination of HIF1α, β-catenin, STAT3 and TAB3 in transfected HCCLM3 cells treated with MG132. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.) | 2023-06-18T06:17:07.223Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "9e0265117bbe02ea45eb4f27893ea31121d131f9",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jare.2023.06.006",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e6e75000f2f2105b881fcea6f25c953fd4d29930",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119286005 | pes2o/s2orc | v3-fos-license | L-Band Spectra of 13 Outbursting Be Stars
We present new L-band spectra of 13 outbursting Be stars obtained with ISAAC at the ESO Paranal Observatory. These stars can be classified into three groups depending on the presence or absence of emission lines and the strength of Brα and Pfγ emission lines relative to those of Humphreys lines from transitions 6–14 to the end of the series. These groups are representative of circumstellar envelopes with different optical depths. For the group showing Brα and Pfγ lines stronger than Humphreys lines, the Humphreys decrement roughly follows the Menzel case-B for optically thin conditions. For the group showing comparable Brα, Pfγ, and Humphreys emission-line strengths, the Humphreys decrement moves from an optically thin to an optically thick regime at a transition wavelength that is characteristic for each star but typically is located around 3.65–3.75 μm (transitions 6–19 and 6–17). Higher-order Humphreys lines probe optically thin inner regions even in the optically thicker envelopes. We find evidence of larger broadening in the infrared emission lines compared with optical lines, probably reflecting larger vertical velocity fields near the star. The existence of the aforementioned groups is in principle consistent with the description recently proposed by de Wit et al. for Be star outbursts in terms of the ejection of an optically thick disk that expands and becomes optically thin before dissipation into the interstellar medium. Time-resolved L-band spectroscopy sampling the outburst cycle promises to be an unique tool for testing Be star disk evolution.
Introduction
Be stars are rapidly rotating dwarf or giant B type objects that show or have once shown emission in the Hα line (Jaschek & Jaschek 1987). Classical Be stars have moderate infrared excesses that originate in the free-free and free-bound emission from ionized circumstellar gas (Gehrz, Hackwell & Jones 1974; Waters 1986). Interferometric studies have shown that this gas is concentrated towards the equatorial plane forming a dense disk-like envelope extending up to ≈ 10 stellar radii from the stellar surface (Grundstrom & Gies 2006, Quirrenbach et al. 1994, Stee et al. 1995). The IR region is dominated by broad and bright emission lines arising from high levels of hydrogen atoms (Briot 1981; Andrillat, Jaschek, & Jaschek 1988; Lenorzer et al. 2002a). He I, Mg II and Na I emission lines in the K-band have also been reported (Clark and Steele 2000).
The IR line optical depths and line flux ratios display a large variation from star to star (Persson & McGregor 1985) and do not correlate with the spectral type (Lenorzer et al. 2002a). Be stars are also intrinsically variable; some of them show mild periodic or irregular photometric variability (Mennickent, Vogt & Sterken 1994, Sterken, Vogt & Mennickent 1996), whereas others show sudden brightenings usually attributed to mass ejections from the surface of the stars, which can occur discretely over a range of timescales (Hubert, Floquet & Zorec 2000, Mennickent et al. 2002, de Wit et al. 2006). These outbursts probably induce variability in the opacity, the size and the geometry of the circumstellar envelope. Thus our aim is to explore the physical properties of envelopes of outbursting Be stars. For this purpose, we selected for an infrared spectroscopic study 6 Galactic Be stars showing long-lived outbursts (duration several hundred days) and 7 showing short-lived outbursts (duration days or tens of days) from the list of Hubert, Floquet & Zorec (2000).
The stars were selected spanning a wide range of projected rotational velocities and most of them have been rarely studied spectroscopically. We hope to contribute to the knowledge of the L-band spectral region in Be stars and its relation with the circumstellar envelope, which have been hitherto scarcely studied.
Observations and data reduction
L-band spectra were obtained with the VLT Infrared Spectrometer and Array Camera (ISAAC, Moorwood et al. 1998) at the ESO Cerro Paranal Observatory in service mode during the nights of May 28-30, July 26, August 7-8 and September 3-4, 2003. The long slit low resolution spectroscopy mode was selected along with a central wavelength of 3.5 µm. The pixel scale was 0.146 ′′ /pixel. Two different setups were used, a narrow slit of 0.3 ′′ and another of 2 ′′ , providing resolving powers of 1200 and 180, respectively. As our targets are bright, we had to use very short exposure times (0.3-2 s) so that the detector does not saturate; this had no effect on the total number of counts recorded, since the exposure time is controlled by the detector and no shutter geometrical bias is involved during the photon acquisition. Images were reduced with the ISAAC pipeline. The spectra were telluric corrected with the aid of early G-type telluric standards observed during the run at similar airmasses to the science objects, using the procedure described in Maiolino, Rieke & Rieke (1996). We built our telluric templates by dividing the telluric spectra by a synthetic solar-type spectrum interpolated at the same resolution and wavelength range. Then, we used the IRAF telluric task to remove telluric absorption lines from the science objects. (IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.) The telluric bands were successfully removed from our spectra, except in some cases in the wavelengths short-ward of 3.4 µm, characterized by heavy and variable atmospheric absorption. Nevertheless, this has minor importance since the spectral lines used in this work are mostly located outside this region. The spectra taken with the wide slit were flux calibrated with the aid of the standard star BS5471 (spectral type B3V), whose L magnitude is known. Those spectra taken with the narrow slit were continuum normalized.
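The telluric-correction step can be summarized with a schematic sketch (assuming all spectra have been resampled onto a common wavelength grid; the arrays below are synthetic placeholders, not ISAAC data):

```python
# Hedged sketch of the telluric correction: the template is the observed
# G-star spectrum divided by an interpolated synthetic solar-type spectrum;
# the science spectrum is then divided by that template.
import numpy as np

wave = np.linspace(3.0, 4.1, 2048)                      # wavelength (microns)
band = 1.0 - 0.4 * np.exp(-((wave - 3.3) / 0.05) ** 2)  # fake telluric band

telluric_std = band * 1.0          # observed early G-type standard
solar_model = np.ones_like(wave)   # synthetic solar-type spectrum
science = band * 1.2               # observed Be-star spectrum

template = telluric_std / solar_model   # pure telluric transmission
corrected = science / template          # telluric-corrected science spectrum
print(corrected.min(), corrected.max())  # flat at ~1.2: band removed
```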
The observing log given in Table 1 indicates the number of spectra per object, single spectra exposure times and some additional observational parameters.
Results
The L-band IR spectral region, between 3.0 µm … L magnitudes were computed by convolution of the spectra obtained with the slit of 2 ′′ with the transmission curve of the L filter given by Bessell & Brett (1988). As part of the spectrum of µ Cen was corrupted, and for KV Mus we had only one spectrum taken with the narrow slit (0.3 ′′ ), which shows no emission lines, no magnitude determinations were possible for these stars. In Table 2 we list the L magnitudes, the fundamental parameters of the program stars and their classification in the aforementioned groups. The V sin i values were taken from Hubert, Floquet & Zorec (2000), Glebocki & Stawikowski (2000), Yudin (2001) and Frémat et al. (2006). Some of our stars were too bright to determine the outburst stage (outburst/quiescence) from reliable ASAS-3 V-band light curves (Pojmański 2001) at the epoch of our L-band spectroscopy. Others were not included in such a catalog. The exception was OZ Nor, which was observed near maximum at the time of the L-band spectroscopy. The Hα spectra of V 395 Vul taken on May 23, 2003 and of V 4024 Sgr taken on May 29, 2003 suggest that our IR observations for these stars were obtained during minimum and maximum Hα emission, respectively (http://astrosurf.com/buil/becat/). Line fluxes, full width at half maximum (FWHM) and equivalent widths, measured in the narrow slit spectra using the splot IRAF task, vary strongly from star to star. In Table 3 we list these parameters for the stars showing emission lines (groups I and II).

Table 2: Spectrophotometric L magnitudes, fundamental parameters from given references and 2MASS color excess of the observed Be stars. The note indicates the classification group and the outburst character (l = long, s = short, see text). V sin i is in km s −1 .

… (Townsend, Owocki & Howarth 2004). Keeping in mind these caveats, we observe that FWHM roughly correlates with V sin i and also is larger for higher order transitions in the Humphreys series (Figs. 4 and 5). We found correlations of the form (Table 4):

FWHM = A · V sin i + B.    (1)

We considered only correlations with R > 0.60 and at most one rejected point. These correlations suggest that rotational broadening is the main line broadening mechanism for these lines and point to a rotationally supported, probably disk-like envelope as the source of infrared line emission. Similar kinematical insights have been derived from other infrared and optical spectroscopic studies of Be stars and they have been interpreted in terms of a disk-like geometry for Be star envelopes (e.g. Sellgren & Smith 1992, Hanuschik 1996, Clark & Steele 2000, Hony et al. 2000). In this view, higher order lines probe inner disk regions, with larger rotational velocities. The fact that the FWHM is larger than 2 V sin i could indicate additional sources of broadening like turbulence, macroscopic velocity fields or electron scattering (although we do not observe prominent electron scattering wings in the lines). We note that the B coefficient defined in equation (1) is the expected line broadening for a star seen pole-on (V sin i = 0 km/s). If planar Keplerian motions dominate the disk kinematics, then we should expect this number to be similar to the thermal broadening in the disk (B th ≈ 13 km/s for 10,000 K hydrogen gas). However, we observe all lines in Table 4 with B >> B th .
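A hedged sketch of how the Eq. (1) coefficients and the acceptance criterion (R > 0.60) can be obtained by least squares; the (V sin i, FWHM) pairs below are placeholders, not the values of Tables 2-4:

```python
# Hedged sketch: fit FWHM = A * Vsini + B for one emission line and keep the
# correlation only if R > 0.60, the acceptance cut used above.
import numpy as np

vsini = np.array([ 50., 120., 180., 240., 300.])   # km/s (placeholder)
fwhm  = np.array([180., 320., 430., 560., 660.])   # km/s (placeholder)

A, B = np.polyfit(vsini, fwhm, 1)        # slope A and intercept B of Eq. (1)
R = np.corrcoef(vsini, fwhm)[0, 1]       # linear correlation coefficient
if R > 0.60:
    print(f"FWHM = {A:.2f} Vsini + {B:.0f} km/s  (R = {R:.2f})")
```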
Since IR lines, especially those of the Humphreys series, probe the inner disk region, it is possible that larger turbulent motions, and eventually departures from the disk geometry in the inner disk, explain why the B coefficients of infrared lines are larger than those of Balmer and Fe II lines (Table 4). As we do not observe Stark-broadened emission wings, we discard pressure effects as the cause of the large observed B. We failed to estimate the electron density of the envelope (and thus an estimate of the pressure effect) using the Inglis-Teller formula, since the quality of the data did not allow us to detect the wavelength of confluence of the Humphreys series with the continuum. The broadening effect mentioned, plus our rather low spectral resolution (R ≈ 1200), probably explains why the typical hallmarks of rotationally supported disks, viz. doubly peaked emission lines, are not observed in our spectra.
In the disk model of Be stars the peak separation ∆λ_n of the nth emission line measures the velocity v_n near the outer disk (Hirata & Kogure 1984):

∆λ_n / λ_n = 2 v_n sin i / c    (2)

In the following we assume a disk rotational law given by:

v(r) = v⋆ (R⋆ / r)^j    (3)

where v⋆ is the equatorial stellar velocity and j is equal to 0.5 for a Keplerian disk and 1.0 for continuous mass loss with conservation of angular momentum. The extension of the disk which corresponds to the nth emission line is:

r_n / R⋆ = (v⋆ / v_n)^(1/j)    (4)

and the extension relative to the Hu14 forming region is:

r_Hu14 / r_Hun = (v_Hun / v_Hu14)^(1/j) ≈ (FWHM_Hun / FWHM_Hu14)^(1/j)    (5)

where we have assumed a linear relation between v_Hun and FWHM_Hun (Hanuschik 1996 and references therein). The fact that the ratio FWHM_Hun/FWHM_Hu14 reaches values up to 2.7 around Hu24 (Fig. 5) implies that the relative disk extension r_Hu14/r_Hu24 for OZ Nor and V 341 Sge equals 3 (j = 1) or 7 (j = 0.5). We find that our empirical groups trace the optical depth of the circumstellar envelope.
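As a quick arithmetic check of the extensions quoted above, equation (5) can be evaluated directly for the measured FWHM ratio of 2.7; a two-line Python illustration:

    # r_Hu14 / r_Hu24 = (FWHM_Hu24 / FWHM_Hu14) ** (1/j), measured ratio 2.7
    fwhm_ratio = 2.7
    for j in (1.0, 0.5):
        print(f"j = {j}: r_Hu14/r_Hu24 = {fwhm_ratio ** (1.0 / j):.1f}")
    # -> j = 1.0: 2.7 (~3);  j = 0.5: 7.3 (~7)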
Diagnostics for the envelope optical depth
We note the change in position of V 395 Vul (≡ 12 Vul) between the two epochs. As said in Section 3, this corresponds to a weakening of the Hα emission line strength, which is consistent with a more translucent envelope in May 2003. We also investigated the ISO spectra of the stars reported by Lenorzer, de Koter, & Waters (2002b) and published by Vandenbussche et al. (2002). We note that above log (Hu14/Brα) ≈ -0.2 all the stars (7 objects, including V 1150 Tau) can be classified as Group I, and below that hypothetical line the stars (9 objects) are Group II. 12 Vul changes between Groups I and II. This shows that transition objects are difficult to find, and that the classification into groups I, II, and III is in principle observationally supported. This classification has quantitative support in the Lenorzer, de Koter, & Waters (2002b) diagram, but can be done more rapidly by simple visual inspection of the spectrum, for classification or selection purposes. We note that Be stars showing long-lived outbursts are found only in the upper right part of the diagram, whereas those showing short-lived outbursts are more widely distributed. Our qualitative conclusion about the optical depth conditions in the envelopes of Group I & II stars is also supported by the emission line ratio Pfγ/Hu16, which is a good discriminator between optically thin and thick conditions (Hummer & Storey 1987, Hamann & Simon 1987). We found this ratio ∼ 1 for all Group I stars and much larger than unity for Group II stars, consistent with theoretical predictions for optically thick and thin envelopes (respectively) with T = 10,000 K and n_e = 10^10 cm^-3.
A quantitative interpretation of the Lenorzer et al. diagram was done by Jones et al. (2009), in terms of a disk with varying density illuminated by a central star of given temperature T_eff. In this model the disk density profile is given by:

ρ(R, Z) = ρ₀ (R⋆ / R)^n exp[-(Z / H)²]

where R is the cylindrical, radial distance from the star's rotation axis and Z is the perpendicular distance from the equatorial plane.
Study of the EWs and Humphreys emission line decrements
We find a strong correlation between EW/λ and wavelength for a given series. This parameter increases with λ and sometimes saturates at EW/λ ≈ 6 × 10^-4 (Fig. 7). This kind of behavior was observed in the prototypical Be star γ Cas in several infrared H I series and was interpreted in terms of a decrease in the line source function in the outer parts of the circumstellar region (Hony et al. 2000). Here we demonstrate that this is a common behavior of Group-I and Group-II stars, the pattern being disrupted only by the larger emission found in the Brα and Pfγ lines of Group-II stars.
We have studied f(Hu n)/f(Hu19) (n is the quantum number of the upper level) versus λ for every star showing emission lines. We find that the Humphreys decrements follow well-defined patterns (Fig. 8). For the stars of Group II, they increase with λ, indicating optically thin conditions, as in the case of β Monocerotis A (Sellgren & Smith 1992). On the contrary, for stars of Group I, the decrements change their behavior at a transition wavelength, λ₀, from optically thin conditions (at shorter wavelengths) to optically thick conditions (at longer wavelengths). This transition is rather fast, and λ₀, given in Table 3, is apparently a characteristic of each star. As in the case of EW/λ, the transition at λ₀ could reflect a change in the optical properties of the envelopes at a certain distance from the star. In this scheme, V 1150 Tau could be classified as a Group-I star. We note that even for Be stars with optically thick envelopes, high order Humphreys lines probe optically thin inner regions.
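One simple way to operationalize the λ₀ estimate is to look for the wavelength at which the decrement run reverses slope. The sketch below assumes line wavelengths and measured decrements are already available as sorted arrays; real data would likely need smoothing first, and the function name is hypothetical.

    import numpy as np

    def transition_wavelength(wavelengths, decrements):
        """Locate lambda_0 where the Humphreys decrement changes slope.

        Returns the wavelength of the first slope reversal, or None if the
        decrements are monotonic (Group-II-like behavior).
        """
        slopes = np.diff(decrements) / np.diff(wavelengths)
        # Indices where consecutive slope segments change sign.
        flips = np.where(np.sign(slopes[1:]) != np.sign(slopes[:-1]))[0]
        if flips.size == 0:
            return None
        # The vertex shared by the two segments marks the transition.
        return wavelengths[flips[0] + 1]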
From Figs. 5 and 7 we deduce a rough anticorrelation between FWHM and EW/λ for the Humphreys lines of a given star, as usually happens for the Hα line in Be stars (Dachs et al. 1986). This can be explained by assuming that the EW scales with the disk size and that higher order lines are formed in smaller, inner disk-like regions rotating faster than the outer, larger regions forming the low order Humphreys lines.
2MASS photometry and continuum slope
For each star we have calculated E(H - K) = (H - K)_obs - (H - K)_0, where the first color is from 2MASS (Skrutskie et al. 2006) and the second is given by Koornneef (1983) for a star of similar spectral type and luminosity class (Table 2). We have not corrected these colors for interstellar extinction, which is expected to be small at infrared wavelengths, especially for our bright and relatively close stars. Keeping in mind that variability could affect the comparison of non-simultaneous data, we find no correlation between color excess, outburst character, and group membership.
We measured the continuum flux ratio between two regions almost depleted of emission lines, S_λ being the spectral flux density. This ratio is bluer for hotter stars and, for a given temperature, is bluer for stars without emission lines (i.e. Group III stars, Fig. 9). No correlation is observed between continuum slope and emission line strength. These findings possibly point to the importance of the stellar flux in the infrared continuum emission: hotter stars produce a bluer continuum, and when the disk becomes developed (Group I-II stars) this continuum becomes redder, probably due to the contribution of free-free emission and partial stellar obscuration by the optically thicker disk. The position of V 457 Sct is notable. This star shows the bluest continuum and no developed emission. However, the absence of Brα and Pfγ absorptions indicates that residual emission is filling these lines. Incidentally, V 1448 Aql also shows these characteristics: a very blue continuum and filled absorption lines. We speculate that these stars could be at a special point of their eruptive cycles, such as in the process of ejecting a hot, optically thick disk. The disk could later be dissipated as an optically thin ring into the interstellar medium, as proposed in the model of de Wit et al. (2006), causing the stars' position to move down in the diagram. Bluer colors during the outburst ascending branch are predicted by this model and are observed in outbursting Be stars (de Wit et al. 2006). The stars should move up and down in the diagram of Fig. 9 during the outburst rise and decay. We note that Be star outbursts seem to be of larger amplitude in red bands (Mennickent et al. 2002), so this effect in our studied region should be significant. Further time-resolved spectroscopy sampling the whole eruptive cycle is needed to test this conjecture for Be stars.
Dust around outbursting Be stars?
We note the absence of dust spectral features in the L band, especially the polycyclic aromatic hydrocarbon (PAH) emission feature at 3.3 µm and the nano-diamond features at 3.43 and 3.52 µm, that have been detected in some Herbig Ae/Be stars. These features have been observed in pre-main sequence stars of spectral type B9 and later, but for earlier spectral types the 3.3 µm band is weak or absent and the other bands are even weaker (Habart et al. 2004, Acke & van den Ancker 2006). As our stars are mostly early B-type, we cannot establish the absence of dusty envelopes from the absence of these key spectral features. However, the infrared color excesses E(H - K) of our targets listed in Table 2 and their 2MASS colors, J - K between -0.08 and +0.34 and H - K between -0.29 and +0.30, are typical for Be stars and not as large as in most hot stars surrounded by dust (e.g. Fig. 11 in Mathew, Subramaniam & Bhatt 2008). This suggests the absence of significant dust in the envelopes of these outbursting Be stars. The IRAS colors of our targets provide the same insight. This finding is consistent with the classification of our targets as "canonical" Be stars. Our targets probably are not surrounded by massive cool envelopes as in the case of Herbig Ae/Be stars.
Conclusions
We have provided a view of the L-band spectra of a selected sample of outbursting Be stars. These spectra show no evidence of dust and do not differ from previously reported spectra of Be stars showing only irregular photometric variability. The observed L-band spectra of 13 outbursting Be stars can be categorized into three broad groups reflecting the optical depth conditions in the Be star envelope. Based on the relative intensity of the Humphreys, Brα, and Pfγ emission lines, a rapid visual inspection of the spectra indicates the optical depth of the envelope. In addition, the Humphreys decrements, and the parameter λ₀ defined in Section 4.3, can be used as diagnostic tools for the optical depth of the circumstellar envelopes. We find that higher order Humphreys lines probe optically thin inner regions even in the case of optically thick envelopes. In addition, the large broadening observed in the IR lines probably reflects vertical velocity fields near the star.
Some power-law disk models describing the infrared emission line properties fail to explain the cases of optically thick envelopes (Group I stars). We expect that our discovery of a large number of these stars will motivate further theoretical work in this area. Our data do not allow us to test a possible correlation between the outburst stage and the spectral appearance, but the fact that the stars were observed at random outburst phases, the changes observed at two epochs in V 395 Vul (≡ 12 Vul), and the blue continua of V 1448 Aql and V 457 Sct suggest that all outbursting stars observed in this project could pass through Groups I, II, and III during their cycles. The existence of these groups is in principle consistent with the outburst description proposed by de Wit et al. (2006) in terms of the ejection of an optically thick disk that expands and becomes optically thin before dissipating into the interstellar medium. Accordingly, λ₀ and the whole spectral appearance should change in a significant way during the entire outburst cycle, following the development of the circumstellar envelope. Variability of Be stars along the diagonal of the diagram of Fig. 6 was already suggested by Lenorzer et al. (2002b) due to the transient | 2009-02-25T02:37:17.000Z | 2009-02-25T00:00:00.000 | {
"year": 2009,
"sha1": "e849cb2e14d6486db74ea0438458c7568885e99f",
"oa_license": "CCBYNCSA",
"oa_url": "http://sedici.unlp.edu.ar/bitstream/handle/10915/93467/Versi%C3%B3n_preliminar.pdf?sequence=1",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e849cb2e14d6486db74ea0438458c7568885e99f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
219574328 | pes2o/s2orc | v3-fos-license | The critical bedside role in identifying and treating lung injury during the COVID‐19 pandemic
The critical bedside role in identifying and treating lung injury during the COVID-19 pandemic
Most early deaths from COVID-19 were from acute respiratory distress syndrome (ARDS) that led to multiorgan system failure (Arentz et al., 2020). COVID-19 primarily injures the vascular endothelium in such a unique way that a COVID-19 patient with ARDS (CARDS) can die even if young and healthy. Patients with ARDS develop stiff lungs that are difficult to ventilate without causing ventilator-induced lung injury (VILI). Through a series of clinical trials known as ARDSNet, which spanned over 20 years, clinicians were able to identify the ideal ventilator settings necessary to treat these patients. The trials revealed that low tidal volumes with high positive end-expiratory pressure resulted in less injury from the ventilator (VILI) (Acute Respiratory Distress Syndrome Network et al., 2000).
More recently, another component of lung damage in the progression to ARDS has been described. In 2017, Brochard et al., in a collaboration between centres in Canada and Italy, first identified the concept known as patient self-inflicted lung injury (P-SILI) (Brochard, Slutsky, & Pesenti, 2017). This is where an initial lung injury causes capillary leak, lung oedema and impaired gas exchange.
This leads to increased respiratory drive and higher tidal volumes from the patient's own spontaneous breaths. This causes more capillary leak and further damage to the lungs in a similar way that a ventilator can cause damage to lungs through VILI.
After the initial onset of respiratory distress from COVID, the patient's lungs will be soft and easy to spontaneously ventilate despite very poor oxygenation (Grasselli et al., 2020). If the mechanism of P-SILI is kept in mind, the logical treatments become apparent. The patient should not be forcefully breathing, and the patient should not have a high cardiac output. The initial approach to treating the respiratory distress through non-invasive support (i.e. high-flow nasal oxygen) and patient discomfort through analgesics or anxiolytics may help by preventing excessive inspiratory efforts. If the respiratory drive cannot be reduced, persistently strong spontaneous inspiratory efforts will lead to worsening lung damage through P-SILI and eventually CARDS (Marini & Gattinoni, 2020).
If this process cannot be interrupted, it may be necessary to intubate and mechanically ventilate these patients. Rates of agitation in ICU patients have been reported to be as high as 70% (Fraser, Prato, Riker, Berthiaume, & Wilkins, 2000). Deep sedation and paralysis by neuromuscular blocking agents may be necessary to prevent the high pressures that can result in VILI from patients who are "fighting the vent". Communication difficulties, family absence and ventilator weaning have been identified as key components of the psychological toll that critical illness can take on these patients (Rotondi et al., 2002). Liberation from the ventilator and eventual extubation can be difficult in patients suffering from CARDS due to limitations placed on visitation and the required personal protective equipment for caregivers. Nurses provide a vital bedside role through reliable interpretation and management of anxiety and agitation during times of both aggressive ventilator support and weaning (Tate, Devito Dabbs, Hoffman, Milbrandt, & Happ, 2012). Effective symptom management for anxiety and agitation is associated with many improvements in patient outcomes such as more ventilator-free days and shorter lengths of stay (Campbell & Happ, 2010).
As the COVID-19 pandemic continues to unfold, the knowledge of the concepts of P-SILI and VILI is essential for bedside nurses.
Adequate assessment of the levels of anxiety and agitation present in these patients is vital to prevent self-inflicted and iatrogenic lung injury. Nurses, who truly know the patient, are the eyes and ears for all other caregivers. It may be necessary to provide aggressive treatments that decrease the damage being done to the lungs through spontaneous breathing. Only the bedside nurse can provide the vital clues to balance the necessary support. Recognizing and treating these symptoms early could be the key to improving outcomes in patients with COVID-19 infections. The severity and breadth of this global pandemic must not sway or deter us from the basic tenets of bedside patient comfort and succour. | 2020-05-28T09:13:09.563Z | 2020-05-27T00:00:00.000 | {
"year": 2020,
"sha1": "a89da041567c2d7f91dcd8f5af88587a02e7ab28",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/nop2.525",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "43c6da6ca9d7e76e84395f5c9809e23c7154a817",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14069400 | pes2o/s2orc | v3-fos-license | Classification of Smartphone Users Using Internet Traffic
Today, smartphone devices are owned by a large portion of the population and have become a very popular platform for accessing the Internet. Smartphones provide the user with immediate access to information and services. However, they can easily expose the user to many privacy risks. Applications that are installed on the device and entities with access to the device's Internet traffic can reveal private information about the smartphone user and steal sensitive content stored on the device or transmitted by the device over the Internet. In this paper, we present a method to reveal various demographics and technical computer skills of smartphone users by their Internet traffic records, using machine learning classification models. We implement and evaluate the method on real life data of smartphone users and show that smartphone users can be classified by their gender, smoking habits, software programming experience, and other characteristics.
INTRODUCTION
In recent years, the number of smartphone users has rapidly increased. According to a report published by Smart Insights 1 , the number of smartphone users grew from 400 million users in 2007 to more than 1,800 million in 2015. In addition, the report claims that at the end of 2015, 97% of adults aged 18 to 34 in the US were mobile device users. The mobility and capabilities of smartphones make them a very popular platform for Internet usage. According to [1], approximately two thirds of the adult population (ages 16 and over) in the UK use smartphones to go online, and the number increases to 90% among adults aged 16 to 34.

The various functionalities of smartphones make them very useful devices; however, these capabilities also pose a great privacy risk to smartphone users [2]. In many cases, smartphone users store sensitive information such as private photos and passwords on their devices. Moreover, smartphones give applications access to sensors such as GPS, gyroscope, and accelerometer. These, and other sensors, can be used by malicious applications installed on the device to reveal information about the user, including activity recognition [3] and demographic properties (e.g., gender) [4]. However, the privacy risks smartphone users are exposed to are not limited to device applications. The Internet exposes smartphone users to many other entities that may violate user privacy. Public Wi-Fi networks, ISP providers, VPN (virtual private network) services, and proxy servers are examples of entities that have access to the Internet traffic of smartphone users. This traffic may contain sensitive information transmitted in plain text (e.g., HTTP forms). Moreover, recent studies show that private information regarding the user may be extracted from Internet traffic using machine learning techniques. In [5], the authors extract statistical and application- and category-based features from the traffic and use location properties of the hotspot to show how public Wi-Fi can reveal the gender and education of its users. In [6], the authors present a scenario where remote entities with access to the user's smartphone Internet traffic (e.g., VPN services) can use it to identify the type of venue (home, organization, hangout, or waiting place) the user is located at.

In this paper, we present a method for classifying smartphone users by various demographic properties and technical computer skills. The method analyzes and aggregates smartphones' Internet traffic records to extract features that represent the smartphone user. The feature extraction process uses feature extraction techniques that were introduced in [5] and [7], enriched with additional new features defined in this study. By applying a supervised machine learning approach, we were able to classify smartphone users by 10 different properties including their gender, age group, and education. The method was demonstrated and evaluated on real data (network traffic) of 143 smartphone users collected during 2014 and 2015; for example, we were able to classify the users by their gender and software programming experience with an accuracy of 83.9% and 77.8%, respectively.
METHOD
Mobile Internet traffic datasets are not publicly available due to their sensitive nature in terms of privacy. Therefore, first we had to collect such data from smartphone users by conducting an experiment. In addition, at the start of the experiment, the users were asked to complete a questionnaire to tell us about themselves and their technical computer skills. After the data was collected, the traffic records were processed and aggregated. We extracted four main groups of features, as well as the set of demographic and technical computer skills of the subjects that were used as labels. Finally, a supervised machine learning framework was applied to train and evaluate classification models on the data obtained during the experiment.
Experiment Set-Up
143 students with Android devices participated in the experiment. We divided the experiment into four parts. The first two parts were conducted during 2014 and consisted of a month-long data collection involving 17 subjects and a slightly longer (two-month) data collection process involving 61 subjects. The other two parts of the experiment took place during 2015 and involved a total of 65 subjects. At the start of the experiment all of the subjects had to complete a questionnaire in order to provide their demographic characteristics (e.g., age and gender) and technical skills regarding computers and programming (e.g., whether the subject had ever written a short program or formatted a computer). Figure 1 presents the distribution of the subjects for several demographic and technological properties. The subjects were requested to install a VPN (virtual private network) client called OpenVPN Connect on their devices, which is available on the Google Play application market. The VPN client was used to redirect the subjects' Internet traffic through the experiment's dedicated VPN server, where the traffic was recorded and stored until the end of the experiment. A unique configuration was set for each subject's device, so the subject's traffic could be distinguished from other subjects' traffic on the server side. The subjects were requested to stay connected to the VPN server continuously during the entire experimental period. However, disconnection events occurred often for many reasons, such as poor network signal, change of networks (e.g., the user switched from Wi-Fi to a 3G connection), issues with the Android VPN API, and subject-initiated disconnections. Note that the experiment was approved by the university ethics committee.
Data Processing
Once the data collection experiment was completed, the data was transferred securely to an analytical server for data processing as follows. First, the traffic records were aggregated into sessions in a manner similar to the method introduced in [7]. Sessions were defined as one of the following: a TCP session (SYN to FIN) or a UDP request and its response. Then, for every session we extracted features from four different feature groups: statistical features, application layer features, domain features, and deep packet inspection features.

Statistical Features - A subset of the feature set that was introduced in [7], mainly consisting of traffic volume features: transmitted and received packet size statistics (max, min, mean, median, and variance), the number of bytes within a session (total, transmitted, and received), and the ratio between transmitted and received traffic. We decided not to use features that represent the network's quality of service, which were introduced in [7] (e.g., the number of retransmitted packets or the interval time between packets), since the focus of the study is to classify the user and not the network.

Application Layer Features - We focused on the two most common protocols in the data: HTTPS and HTTP. In HTTPS sessions we examined the SSL/TLS version used to determine the connection's security, and we checked that the SSL certificate was not expired or self-signed. From HTTP sessions we extracted the number of cookies the client provided and the Content-Type field, similar to [7]. In addition, we parsed the User-Agent string to extract the OS version of the device. Another piece of information that we extracted from these protocols was the domain name (HTTP hostname and SSL server name). The domain names were used to extract the domain features.

Fig. 1. Subjects' demographics and technical computer skills

Domain Features - The domain names that were extracted from the application layer headers can provide various information about the mobile user. To extract such information we used the following third-party services and databases: VirusTotal, Alexa rank, WoT (Web of Trust), BitDefender category, and urlblacklist.com. For every session with a domain name available, we extracted the Alexa popularity score. We used WoT to determine domains' security scores (good site, trustworthiness, and child safety) and security categories (scam, spam, malware or viruses, privacy risks, and phishing). In addition, a general category such as social network or education (32 possible values) was derived by combining and aggregating the BitDefender and urlblacklist.com domain categories.

Deep Packet Inspection Features - Deep packet inspection is a complicated process where the content of the packet is analyzed. It can provide meaningful information about the subject; however, it takes significant resources and effort to extract this information. We counted the number of HTTP forms, the presence of email addresses, usernames, and password fields in these forms, and the number of downloaded files and their types. The deep packet inspection process included decoding GZIP-encoded traffic and parsing JSON and XML files, which are very common in HTTP traffic.

A single session may not contain enough information for reliable profiling of users. Thus, all of the sessions that were associated with a subject were aggregated into a single instance. The aggregation process was performed as follows.
For every numerical session feature, we calculated the average, median, minimal, and maximal values across all of the sessions associated with the subject. For nominal session features (e.g., domain category features), we created numerical subject features that represent the categorical value's incidence in the subject's sessions. For example, if 50 sessions were associated with a subject, of which 30 were from the "search" category and 20 were categorized as "news," the values of the "search" and "news" features for the subject were 0.6 and 0.4, respectively, and the values of the other domain category features were all 0. In addition, for each subject we extracted the ratios of traffic volumes between the most popular ports in the experiment's traffic (TCP 80 -HTTP, TCP 443 -HTTPS, and TCP 5228 -Google Play store). To form the demographic and technical computer skills dataset, the questions from the questionnaire were used as labels for the subjects. Table 1 presents the labels that were extracted for each subject and the values of the labels in the dataset.
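The aggregation described above maps naturally onto a group-by over subjects. The sketch below is a minimal pandas rendering of that step; the column names (subject, bytes_total, domain_category) are illustrative stand-ins for the paper's features, not names from the actual dataset.

    import pandas as pd

    def aggregate_sessions(sessions: pd.DataFrame) -> pd.DataFrame:
        """Collapse per-session features into one instance per subject."""
        # Numerical session features -> mean / median / min / max per subject.
        numeric = (sessions.groupby("subject")["bytes_total"]
                   .agg(["mean", "median", "min", "max"])
                   .add_prefix("bytes_total_"))
        # Nominal session features -> per-subject incidence of each value,
        # e.g. 30 "search" sessions out of 50 gives search = 0.6.
        categories = pd.crosstab(sessions["subject"],
                                 sessions["domain_category"],
                                 normalize="index")
        return numeric.join(categories, how="outer").fillna(0.0)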
EVALUATION
In order to classify the demographics and technical computer skills of subjects from their mobile Internet traffic, a supervised machine learning approach was chosen. We used the scikit-learn machine learning Python package to train machine learning classification models. For every label, we trained and evaluated multiple machine learning models that differ by the classification algorithm and the number of features used to train the model. The machine learning models were based on the random forest (RF) and extra trees (ET) ensemble classification algorithms. Feature selection was performed using the K-best (ANOVA F-value) algorithm, for K values of 30, 50, 80, and 120.
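In scikit-learn terms, this training scheme corresponds to a feature-selection step chained to an ensemble classifier. A minimal sketch of the model grid follows; the ensemble size and random seed are assumptions, not values reported in the paper.

    from sklearn.pipeline import Pipeline
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier

    def build_models():
        """Build one pipeline per (algorithm, K) combination."""
        models = {}
        for k in (30, 50, 80, 120):
            for name, cls in (("RF", RandomForestClassifier),
                              ("ET", ExtraTreesClassifier)):
                models[(name, k)] = Pipeline([
                    # K-best selection scored by the ANOVA F-value.
                    ("kbest", SelectKBest(f_classif, k=k)),
                    # n_estimators and random_state are assumed defaults.
                    ("clf", cls(n_estimators=100, random_state=0)),
                ])
        return models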
To evaluate the quality of the classification models, we used the leave-one-out cross-validation method. In this method the entire dataset except for a single instance is used for training, and the excluded instance is used for testing the model. This process is repeated for every instance in the dataset, and the performance measures are calculated on the classification of all of the instances. The evaluation metrics we present are the accuracy score and the weighted AUC (WAUC), weighted precision, and weighted recall measures. In addition, we present the F1 score, which combines the precision and recall metrics. Table 2 presents the evaluation results of the classification models that yielded the best F1 score for each label. The results show that it is possible to classify smartphone users by their demographic characteristics and technical computer skills.
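The leave-one-out evaluation can be expressed with scikit-learn's cross-validation utilities. The sketch below pools the out-of-fold predictions and computes the weighted metrics named in the text; model, X, and y are assumed to come from the previous step.

    from sklearn.model_selection import LeaveOneOut, cross_val_predict
    from sklearn.metrics import (accuracy_score, f1_score,
                                 precision_score, recall_score)

    def evaluate_loo(model, X, y):
        """Leave-one-out CV: each instance is predicted by a model trained
        on all remaining instances; metrics use the pooled predictions."""
        pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
        return {
            "accuracy": accuracy_score(y, pred),
            "precision_w": precision_score(y, pred, average="weighted"),
            "recall_w": recall_score(y, pred, average="weighted"),
            "f1_w": f1_score(y, pred, average="weighted"),
        }

The weighted AUC reported in the text would additionally require class-probability estimates, e.g. cross_val_predict with method="predict_proba".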
To better understand how the Internet traffic reveals information about the smartphone user, we analyze the importance of the features in the different classification models. The importance of a feature for a model was defined as the feature's average importance in the decision trees which are part of the ensemble classification model.
The importance of a feature in a decision tree was derived by evaluating the ability of the feature to distinguish a specific class and the depth at which the feature appears in the tree (features close to the root affect more samples). For each label, the top five features (based on their importance) were extracted from the model that presented the best WAUC results.
CONCLUSIONS
In this paper, we use Internet traffic of smartphone users to classify them by various demographic characteristics and technical computer skills. We describe the feature extraction process and machine learning training scheme and implement the method on the real life Internet traffic of 143 student subjects.
The evaluation of the method shows that Internet traffic can be used to classify smartphone users and reveal information about them to entities with access to such traffic (e.g., Wi-Fi hotspots, VPN services, ISPs). Our analysis of the classification models shows that they are heavily dependent on domain features. These features represent the popularity, security, and categories of the websites that the users communicate with. Thus, these features can be considered private information, and revealing them may violate the privacy of the user. Moreover, machine learning models are able to profile users using these features and determine the demographics and technical computer skills of smartphone users. Using VPN services to mitigate such privacy violations was suggested in [5]. However, users must select their VPN service carefully, since the services themselves may violate users' privacy. Another solution suggested by [5] was generating dummy traffic. Dummy traffic can mislead classification models by manipulating the values of features, but generating such traffic on smartphones can cause performance issues, compromise the user's experience, drain the battery, and may cause the user to incur extra charges from the mobile carrier. This research is limited to a relatively small number of subjects, all of whom are students who live in the same country. Thus, the experiment's sample may not adequately represent the diversity of the smartphone user population. Despite this, we believe that the classification results show that smartphone users can be classified by their Internet traffic. Moreover, a larger, more diverse dataset is likely to yield better results, due to a larger training set and greater variance across the demographic groups.
In the future, we intend to increase the sample size and diversity of the users and to classify smartphone users in other ways, including their security score. | 2017-01-01T08:12:49.000Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "348b8bb19e6370ea53a1ee938327f280f4fda078",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "348b8bb19e6370ea53a1ee938327f280f4fda078",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
59332072 | pes2o/s2orc | v3-fos-license | Shamanism in Contemporary Norway: Concepts in Conflict
To choose a terminology for an investigation of shamanism in contemporary Norway is not entirely without problems. Many shamans are adamant in rejecting the term religion in connection with their practices and choose broader rubrics when describing what they believe in. When shamanism was approved as an official religion by the Norwegian government in 2012, tensions ran high, and many shamanic practitioners refused to accept the connection between religion and shamanism. This chapter provides an account of the emic categories and connections used today by shamanic entrepreneurs and others who share these types of spiritual beliefs. In particular, the advantages and disadvantages of the term religion and how it is deployed on the ground by shamans in Norway will be highlighted.
Introduction
In the late 1980s, shamanism gained a foothold in Norway, at the same time influencing cultural life and various secular and semi-secular currents.
One aim of this chapter is to take the diversity and hybridity within shamanic practices seriously through case studies from a Norwegian setting. Overall, I try to paint a picture of shamanism in Norway in its cultural context and describe the concepts, rubrics, and connections that practitioners deploy to position themselves in a Norwegian cultural and political context. The chapter explores the dynamics through which abstract concepts and ideas find moorings in a local community and in participants' reality here and now, gradually generating distinct cultural fields. The history of shamanism provides insight into Western assumptions about religion and religiosity in general. It stands as an example of how religious labels are formed in ever changing contexts-as a by-product of broader historical processes.
Taking local practices and communities as a starting point offers rich opportunities for getting close to individual practitioners and their beliefs, visions, and creativity. Based on interviews with central persons in the shamanic environment in Norway from 2004 to 2018, as well as on fieldwork at courses, ceremonies, and festivals, this chapter will provide empirical knowledge about which notions of shamanism are used today by shamanic entrepreneurs and others who share these types of spiritual beliefs. As a folklorist and culture researcher, I aim at understanding how people create culture and form systems of meaning that organize everyday life. I seek to track changes, boundary markers, and the complex, procedural, and polysemic meanings people ascribe to their actions.
I have chosen to examine the field of shamanism in Norway ethnographically by focusing particularly on some specific contexts and personalities using interviews, observation, and document analysis as my main research tools. Even though these tools represent different approaches to the field, the combination opened the possibility for more depth as well as understanding.
Cultural analysis forms a central basis for my academic understanding. A culture analytical approach is about understanding and interpreting what is meaningful for members of a culture (Frykman and Löfgren 1979; Ehn and Löfgren 1982). It is about seeing how meaning is created and re-created. The focus is directed toward everyday reality, to the participants' lives, their experiences, and their meetings and negotiations in relation to dominant discourses. For me, cultural analysis constitutes a tool to highlight perspectives that say something about contemporary shamans' values, attitudes, and interpretations of everyday life, including shamanic activities and experiences.
Shamanism in Norway
Contemporary shamanism has become a global phenomenon with shamans in many parts of the world sharing common practices, rituals, and a nature-oriented worldview and lifestyle. The highlighting of shamanism as a universal phenomenon was inspired by the English translation of Romanian historian of religion Mircea Eliade's Shamanism: Archaic Techniques of Ecstasy (Eliade 1964). However, within this global fellowship, diversity is still its most prominent feature. Diversity is displayed in terms of the various traditions that the practitioners choose to follow and revive, in terms of practices, politics, values, and where it is all taking place. This means that studies of the dynamics of shamanic entrepreneurship in one particular place are not necessarily directly transferable to other local contexts. Although the United States can be described as the cradle of modern shamanism, the spread of shamanic religious practices and ideas to other habitats is not a uniform process but involves adaptations to local cultural and political climates.
The U.S. influence was particularly pronounced during the first stages of shamanism in Norway. Michael Harner's shamanism with its alleged Native-American base reached this region during the 1980s, along with New Age and occult impulses. Prior to the late 1990s, shamanism in Norway thus differed little from core shamanic practices developed by Michael Harner, often referred to as the pioneer of modern shamanism. Since then, however, practitioners of shamanism in Norway have been increasingly engaged in working to recover the indigenous traditions of their country and ancestors. A Sami version of shamanism has been established, along with a new focus on Norse traditions as the source for ritual creation and religious practice.
In previous studies I have traced the history of the process of giving shamanism in Norway a local flavor to the Sami author and journalist Ailo Gaup (1944-2014), who is considered the first shaman in contemporary Norway (see Fonneland 2010). As Galina Lindquist argues, a striking feature of shamanic performances and "an important condition of its [shamanism's] existence" is that "its performative expression ... hinges entirely on certain individuals" (Lindquist 1997, p. 189). In a Norwegian context, few other individuals have had so much to say about the development and design of the shamanic environment as Gaup. I interviewed Gaup in 2005, when the process of bringing forth and developing local expressions of shamanism was in its infancy. His story reveals both a strong influence from Harner's core shamanism and a strong desire to bring forth Sami religious traditions as the basis for religious practice in contemporary society.
In contemporary Norway, the growth in shamanic practices and expressions is reflected in, among other things, the alternative fairs that are arranged in cities across the country. At these fairs, shamans and New Age entrepreneurs market their goods and services, and the public's interest, and thus attendance, rises annually. A Sámi shamanic milieu is constantly evolving, and a growing number of Sámi shamans offer their services online (Fonneland 2010). Additionally, a wide range of new products have been developed, including courses on Sámi shamanism, on the making of ritual drums (goavddis), guided vision quests in the northern Norwegian region, healing sessions inspired by Sámi shamanism, and, to mention one of the latest innovations, the shamanic festival Isogaisa. Finally, and importantly, a local shamanic association concerned with the preservation of Sámi and Norse shamanic traditions was granted status as an official religious community by the County Governor of Troms on 13 March 2012. The various products, and information about them, are available through advertisements, local media coverage, Facebook groups, websites, and local shops. A variety of Sámi ritual drums are currently offered, for instance, in tourist shops, at the annual New Age market, and on the websites of shamans (see Fonneland 2012a, 2012b). Choosing a terminology for an investigation of shamanism in contemporary Norway has been a challenge and is not entirely without problems. In contemporary society, the words shaman and shamanism have become part of everyday language, and thousands of popular as well as academic texts have been written about the subject. In recent years, the term shaman has in Norway become an umbrella term for the Sámi noaidi (the Sámi indigenous religious specialist), as is the case with religious specialists among people referred to as "indigenous," more or less regardless of the content of their expertise and practices. 1 However, the noaidi has not always been perceived as a shaman. The word shaman is an example of the complexities often involved in translation processes over time and across space (see Johnson and Kraft 2017). The term is widely regarded as having entered Russian from the Tungus samán, transferring to German as schamane, and then into other European languages in the seventeenth century, eventually entering the academic vocabularies of anthropologists and historians of religions and being related to indigenous people elsewhere (see Wilson 2014, p. 117). In the 1960s, the term spread to the neopagan milieu, where the shaman is not only recognized as an indigenous religious specialist but as having abilities potentially enshrined in all humans. However, as Graham Harvey warns us, the use of the term shaman now encompasses numerous local words for shamans, each with their own particular associations (Harvey 2003, p. 1). The term shamanism, in other words, can be seen as an expression of Western scholarly denial of the complexity of "primitive" religions and the reduction of their diversity to a simplistic unity. When it comes to these types of translation processes, it is important to bear in mind James Clifford's reminder: "Translation is not transmission . . . Cultural translation is always uneven, always betrayed. But this very interference and lack of smoothness is a source of new meanings, of historical traction" (Clifford 2013, pp. 48-49).
During the past decades, several researchers have opposed the term shamanism (see, among others, Von Stuckrad 2002; Svanberg 2003; Znamenski 2007; Rydving 2011). As Fonneland, Kraft, and Lewis argue, this is partly "due to the historical trajectories and to their results, including widespread notions of shamanism as an ism" (Kraft et al. 2015, p. 2). In this chapter, I take account of emic categories and connections, focusing on which notions of "shamanism" are used today by shamanic entrepreneurs and others who share these types of spiritual beliefs. From this scholarly standpoint, I find it important to avoid entering the debate over whether shamanism is "genuine" or not. As a folklorist, I look at the invention of traditions as something ubiquitous, noting that indigenous religions also change. Tradition is not a static thing but an ongoing process. I support folklorist Sabina Magliocco who underlines, "What some scholars have called 'inventions', 'folklorism', or 'fakelore' I see as integral steps in the formation and elaboration of tradition, worthy of investigation in their own right" (Magliocco 2004, p. 10).
Terminology is an equally debated issue among shamanic practitioners. The shamans I have interviewed reject the term neoshaman, partly due to its biased tone, but primarily to designate their affinity with the past and eschew any distinction between their practices and those of ancient and indigenous cultures. From an academic point of view, the word neoshaman is nonsensical. What I can observe and know is that in contemporary Norwegian society there exist numerous shamans. What was found and which terms made sense in local indigenous communities several hundred years ago are much more complicated questions, and they are related to various scholars' interpretations of the past.

1 We lack a historiography of the noaidi's meeting with the shaman, but this meeting most likely represents a long and gradual process. The term shaman appeared in texts from the late 1800s, including in J.A. Friis's Lappisk Mythologi, Eventyr og Folkesagn (Lappic Mythology, Fairytales and Folktales) (Friis 1871), but the term first became a "standard of norm" during the 1970s (see Kraft 2016, pp. 52-53).
During my fieldwork among shamanic practitioners in the period 2004-2018, the term that arose most in debates was the term religion. Whether shamanism should be classified as a religion or not led to intense discussions on several occasions. Until recently, many shamans, such as, for example, Ailo Gaup, adamantly rejected the term religion in connection with their practices. In keeping with his teacher Michael Harner, Gaup regarded core shamanism as a "technique" (Eliade's term) and a way of life, not as a religion. In his book The Shamanic Zone (Gaup and Gundersen 2005) Gaup wrote:

"I am aware that some people describe shamanism as a religion. I hear it being referred to as animism, nature religion, primeval religion or primal religion. There are still some who call it Paganism, and probably Satanism as well. The term charlatanism belongs to the more curious, although it occasionally has a certain justification. Shamanism is not a religion at all for me, and I am not striving to spread a new religion. Religion is for those who cannot see for themselves. That statement is from a Nepalese shaman ( . . . ). Michael Harner says that a religion is a mixture of spirituality and politics. Shamanism is spirituality and I hope it stays as spiritual as possible." (Gaup and Gundersen 2005, p. 323)

Gaup underlined a skepticism concerning the connection between what he referred to as spiritual practices and politics. He wanted his practices to be free from dogmas, laws, and institutional structures and regulations, and feared that these were factors that may gradually "corrupt the message" (Gaup and Gundersen 2005, p. 324). Later in the same chapter, he noted: "If I have any religion, it is creativity" (Gaup and Gundersen 2005, p. 326). Creativity in The Shamanic Zone is highlighted as a powerful creator god that one, by activating, can free oneself from the pressure from outside and from everything that wants to capture one's attention. Gaup emphasized this further by pointing out that: "Shamanism did not arise in the same way as Christianity, Islam or Buddhism, each of them being created by a separate religious founder. This old art has been here all the time as a possibility or an original heritage innate in human beings." (Gaup and Gundersen 2005, p. 9)

Shamanism is presented here as a foundation in all the world's cultures, as art, and as a spiritual heritage in all human beings. Creativity in this context becomes a key through which people can access and express the art of shamanism. To elaborate on shamanic practitioners' ideas on the concept of shamanism, I asked four female and one male shaman, all in their forties, at the Isogaisa festival to describe their connotations of the word shamanism. 2 They point out: "Shamanism is the oldest known techniques for healing, power and insight".
"Shamanism is to be in the nature and to have the power to ask questions and to get answers" "Shamanism is that one believes in powers outside humans' control". "Shamanism is to look upon man as part of the nature-as part of the circle of life. Humans does not stand outside the circle and cannot control it.
"Shamanism implies a respect for everything living".
"Shamanism is the free mindset. To have the possibility to believe and think what you yourself want to".
In the shamans' descriptions the word religion is absent. Shamanic praxeology, ontology, and cosmology are here described in broad rubrics as ancient techniques and as holistic ways of life in close contact with and respect for nature. In addition, their quotes can be said to exhibit what Heelas termed "unmediated individualism" (Heelas 1996, p. 21) by placing a high value on individual freedom and autonomy and revealing a suspicion towards institutional structures. Contrary to religion, shamanism is approached as a worldview or way of life closely linked to the individual practitioners' own inner guidance. As Ann Taves and Michael Kinsella underline in the presentation of this special issue: "To govern a way of life, a worldview does not necessarily have to be highly elaborated or rationalized or even explicitly articulated." 3 As such, both the terms worldview and way of life embrace the individual aspect that the shamans emphasize. People engaging in shamanism in Norway describe themselves as part of a community dedicated to highlighting Sámi or Norse indigenous traditions as a spiritual heritage. Rather than an organized movement with identifiable doctrines, practices, and leaders, shamanism in Norway is complex, multifaceted, and loosely organized. It shows how local pasts, places, and characters are woven into global discourses on shamanism, and in this melting pot, new forms of practices and worldviews are taking shape.

2 This fieldwork took place in August 2014 and was organized in cooperation with archeologists Tiina Äikäs, Wesa Perttola, and Suzie Thomas, and scholar of religion Siv Ellen Kraft.
Still, neither the term worldview nor the term way of life has the official recognition held by the term religion. This became utterly clear in the processes of converting shamanism into an authorized denomination in Norway in 2012.
The Shamanistic Association – Concepts in Conflict
It matters what we call things. This fact was highlighted in the debates that followed the Norwegian governmental approval of the Shamanistic Association (SA) as an official religion on 13 March 2012. In Norway, this was the first time a shamanic movement was able to obtain the status of an official religious community with the right to offer and perform life cycle ceremonies and gain financial support relative to its membership.
According to Kyrre Gram Franck, the first leader of SA on a national level, the intention behind the establishment of SA is that the association will develop into a unifying force with the ability to strengthen individuals' and groups' rights to practice shamanism. Not least, he hopes that the association will develop into a true alternative for those who adhere to shamanistic belief systems, and that the construction of life cycle ceremonies like baptisms, confirmations, weddings, and funerals will help to increase people's interest in shamanism.
The external forces of Norwegian governmental laws and regulations play an important role in shaping the Association. In the application process, governmental regulations had to be dealt with in many arenas. Initially, Gram Franck applied to the County Governor for permission to start a shamanistic organization building on a shamanistic worldview. In our conversations, Kyrre points out that the word worldview was emphasized precisely to help ameliorate emic tensions connected to the term religion. However, this proved to be difficult because of the bureaucratic system and rules regulating freedom of beliefs. If SA was going to have a chance at getting approval to perform shamanistic life cycle ceremonies, they first needed to establish themselves as a religious community. Groups applying for official status as a religious community need to frame their application according to The Religious Communities Act (Lov om trudomssamfunn og ymist anna). By doing this, they also reproduce a certain understanding of religion derived from Christian understandings of what constitutes the "core essence" of religion (Owen and Taira 2015, p. 94).
SA, then, is a construct designed to meet the requirements for the recognition of religious communities, highlighting how shamanic practices and worldviews are adapted, transformed, and changed to fit governmental regulations (see also Taira 2010). 4 To gain support, a religious community, according to the report, must "be based on common binding perceptions of existence in which man sees himself in relation to a god or one or more transcendent powers" (confession of faith) (§ 2-1, entitled to financial subsidies).
3 http://www.mdpi.com/journal/religions/special_issues/-ethnographies, accessed on 4 April 2018.
4 The external forces of Norwegian governmental laws and regulations have direct consequences for the design and maneuverability of shamanistic groups. The law on religious freedom has been enshrined in Norway since 1964. This has also made it possible for groups without any affiliation to the Church of Norway to be classified legally as religious communities. The idea of equal treatment based on faith and belief is rooted in the Declaration of Human Rights, which among other things highlights the equal right to freedom of religion and belief and the right to protection against all unfair discrimination on the grounds of religion and belief. The Religious Communities Act (Lov om trudomssamfunn og ymist anna) that was introduced in 1969 provides a framework for religious organizations in Norway. The Act ensures virtually equal treatment of the Church of Norway and other religious communities and denominations by facilitating that religious communities may apply for state and municipal subsidies per member. No country provides the same level of financial support for religious communities as Norway, and by this arrangement, the country represents an outer point where state and municipal grants form most of the resource base for the Norwegian Church, and where other religious communities receive similar, public subsidies per member (see Askeland 2011).
The Religious Communities Act's (Lov om trudomssamfunn og ymist anna) definition of religion favors Protestant Christian religious forms, to which other religions are expected to conform in order to gain recognition as a religion. The letter Gram Franck sent to the County Governor to establish both a national board located in Tromsø and a local shamanistic association is dated 16 January 2012. It contains certain requisite information in a number of paragraphs that deal with everything from rules for membership, to objectives, to rules for leaders of the local religious communities, to matters relating to the design of the Association's life cycle ceremonies. To gain official recognition as a religious community, the community must also submit an official creed, the contents of which must not be "in conflict with public morals." SA's creed is highlighted in the first section of the letter to the County Governor: §1.
1.1
The power of creation expresses itself in all parts of life and human beings are interconnected with all living beings on a spiritual plane. Mother Earth is a living being and a particular responsibility rests on us for our fellow creatures and nature. All things living are an expression of the power of creation and therefore are our brothers and sisters.
A shamanistic faith means acknowledging that all things are animated and that they are our relatives. And that by using spiritual techniques, one can acquire knowledge through contacting the power of creation, natural forces and the spiritual world. A shamanistic faith involves a collective and individual responsibility for our fellow creatures, nature beings and Mother Earth. Mother Earth is regarded as a living being.
Shamanistic practice means the use of shamanistic techniques both for one's own development and for helping our fellow humans and other creatures. This means that creation is sacred and one celebrates the unfolding of the life force. 5 (my translation)
The main emphases in the creed are the struggle to protect the environment, a holistic worldview, and Mother Earth as a key symbol for shamanistic practitioners. The shamans whom I interviewed at Isogaisa in 2014 also highlighted these themes as the core of their shamanic way of life. The symbolic values and ideals emphasized in this paragraph are not unique to Nordic shamanism but can be found in shamanic milieus across the globe (see Beyer 1998; Von Stuckrad 2002). From the very beginning, Mother Earth has been a central touchstone in shamanic practices. She is an essential figure to whom one attributes power and to whom one offers sacrifices. A broad statement like this serves to encompass the diversity of practitioners of shamanism and excludes no one on the basis of national or ethnic identity.
5 The text is taken from the letter to the County Governor; http://www.facebook.com/groups/291273094250547/files/#!/groups/291273094250547/doc/302374349807088/, accessed 29 January 2013 (my translation).
Nevertheless, the call for a creed is clearly contrary to a religion that is non-dogmatic, and it forces SA to conform to Christian values. Gram Franck emphasizes that it felt problematic to construct a creed, and that he was aware that this would lead to tensions. He still highlights that there was no way around it if SA was to have a chance of being approved. He points out:
GRAM FRANCK: That has been the heaviest obstacle (laughter), to understand what they were looking for. First, we got a nice disapproval, and an encouragement to resubmit. We spent a lot of time and I thought that if we do not make it this time, we need guidance. Then, we understood that it was a wording that had to be included for the document to be approved. We had to use the term "spouse," for example. It had to be literal and in the right order. This shows that there is a bureaucracy that interferes with the religious communities' design and what we can communicate that we believe in. Much of this builds on Christian principles, so I felt a bit reluctant to go into the process.
That this has been a challenging maneuver is something that Lone Ebeltoft, the leader of the Shamanistic Association's local branch in the county of Tromsø, also reflects on in one of our conversations:
EBELTOFT: We had to work hard to find a formulation everyone could accept, because it is important not to force anyone into anything, especially within shamanism, where the main goal is that everyone should be free. This is at the core of the critique against SA. We notice it when we are out at fairs and the like, telling people about the Shamanistic Association; someone always says, "I'll never join anything like that, no one will force me into anything." They have simply not understood. We only wish to facilitate shamanistic practices.
To have a creed that says something about the relationship with a god or gods, and that does not conflict with "public morality," is mandatory. This requirement nonetheless sits uneasily with shamanic practitioners who see their worldview as fundamentally non-dogmatic and for whom organized religion is viewed as a threat to authentic spirituality. In other words, public registration challenges some of the key ideals within shamanism, namely individual religious freedom, anti-dogmatism, and anti-institutionalization. For some practitioners of shamanism, and particularly for shamanic entrepreneurs trying to make a living from shamanic healing, drum making, or other practices connected to shamanism, SA's entry into the shamanic arena in Norway thus appeared as a threat. They fear that SA will introduce rules of conduct and religious leadership, and restrict religious freedom.
The association's key figures and leaders, Gram Franck and Ebeltoft, have been interviewed by local and national newspapers, radio, and TV. TV2, one of Norway's largest national TV channels, covered the news of the initiation of a shamanistic association in Tromsø. The program emphasized that Ebeltoft welcomed the governor's decision and that she expressed her ambition to preserve and continue the shamanistic traditions and practices of the country. It was further highlighted that the Shamanistic Association's goal is to understand and respect nature. Nor is shamanism in any way mysterious. Shamanism is a world religion, and in the North, people are committed to preserving the Sámi and Norse (Arctic) traditions (TV2, 14 March 2012, italics by the author).
All relevant media stories have been characterized by a positive attitude toward the newborn religious association. This positive attention is in stark contrast to how the media in general have covered New Age events and entrepreneurs. According to Siv Ellen Kraft, the New Age does not hold a high position on the media's list of real religions and acceptable religiosity (Kraft 2011, p. 105). In the case of SA, we thus have media contributions that show a genuine interest in the phenomenon of contemporary shamanism. In the various reports, shamanism is portrayed not as a countercultural movement characterized by oppositional attitudes and naïve as well as unreliable social actors, but rather as a world religion. The media are, as is well known, a key player in the development of the field of religion, both in terms of internal relationships of power and authority and in highlighting certain issues and angles as particularly relevant. In Norway, shamanism, which started out as a Harner-style version of shamanism in the late 1980s and gradually developed into localized Norse and Sámi variants, has entered the field of world religions with the support of the media. It is currently viewed as a positive contribution and a necessary alternative, embodying important attitudes concerning contemporary environmental issues and materialistic lifestyles.
Conclusions
As James Beckford notes, "Disputes about what counts as religion, and attempts to devise new ways of controlling what is permitted under the label of religion have all increased" (Beckford 2003, p. 1). The Shamanistic Association (SA) appears to have been created for the purpose of meeting the criteria required for obtaining the rights of a Norwegian religious community. The national legal framework thus inspired a diverse group of professional entrepreneurs to join forces and organize themselves into a religious association.
Recently, the government has submitted a new law for consultation, the "Proposed new law on religious communities". In short, the proposed new law implies that religious communities with fewer than 500 members will no longer receive financial support. Several smaller religious communities, such as the Shamanistic Association, may disappear if they fail to increase their number of members. The Board of SA has issued a strong rebuttal of the proposed law.
The concept of religion is, and has been, imbued with varying connotations and values in different societies and contexts. Why is it so important for the Shamanistic Association to maintain its status as a religion, given that the approval challenges some of the most important ideals within contemporary shamanism? One reason is, of course, the statutory benefits: the financial support and the right to perform religious ceremonies. Equally, the process is about gaining acceptance in Norwegian society. The state approval of SA as a religion implies an acceptance of present-day shamans, their activities, attitudes, and beliefs, and as such is a means for SA to reach out to potential members and to gain attention for themselves and their message. The approval by the county governor also makes SA a representative of the Norwegian shamanic environment in the public space, although this does not necessarily reflect the situation within that environment. In other words, the approval of SA as a religion functions in relation to social interests and power relations among practitioners of shamanism and in relation to society. | 2018-12-30T05:22:10.586Z | 2018-07-23T00:00:00.000 | {
"year": 2018,
"sha1": "212ab48c60fd74a22ae49a0e824ea5e171dcc1bd",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/rel9070223",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b47c13feb4456fcd2d1a460f062658d9d635c5c9",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
11043482 | pes2o/s2orc | v3-fos-license | Treatment adherence in patients with type 2 diabetes mellitus correlates with different coping styles, low perception of self-influence on disease, and depressive symptoms
Background Insulin analogs are regarded as more convenient to use than human insulin; however, they require a different administration scheme due to their unique pharmacokinetic and pharmacodynamic properties. This study aimed to assess difficulties with adherence to treatment with insulin analogs in patients with type 2 diabetes mellitus (T2DM), who had previously been treated with human insulin. The associations between difficulties with adherence and clinical, demographic, and psychological characteristics were also evaluated. Patients and methods The study was conducted on 3,467 consecutively enrolled patients with T2DM (54.4% women), mean age 63.9 years (SD =9.57), who had recently undergone a physician-directed change in treatment from human insulin to insulin analogs. The questionnaires addressed difficulties with switching the therapy, coping styles, well-being, and perception of self-influence on the disease. Results No adherence problems in switching therapy were reported in 56.6% of patients. Specific moderate difficulties were reported in 10.4%–22.1% of patients, major difficulties in 0.7%–6.9% of patients, and very significant difficulties in 0.03%–1.3% of patients. Overall, remembering to modify the insulin dose in the case of additional meals was the most frequently reported difficulty, and problems with identifying hypoglycemic symptoms were the least frequently reported. The increased risk of difficulties was moderately related to low perception of self-influence on diabetes and poor well-being. The intensity of problems was higher among those who were less-educated, lived in rural areas, had complications, and/or reported maladaptive coping styles. Conclusion Switching from human insulin to an insulin analog did not cause adherence problems in more than half of the patients. In the remaining patients, difficulties in adherence correlated with maladaptive coping styles, low perception of self-influence on disease course, and depressive symptoms.
Introduction
The World Health Organization (WHO) defines adherence as "the extent to which a person's behavior - taking medication, following a diet, and/or executing lifestyle changes - corresponds with agreed recommendations from a health care provider". 1 Patient adherence is a derivative of educational initiatives provided by medical staff, but the influence of a patient's psychosocial profile cannot be overlooked. 2 Indeed, the biopsychosocial model of glycemic control in diabetes includes relationships among stress, coping, and regimen adherence. 3 Furthermore, the model indicates that the coping style employed in response to diabetes depends on the perceived degree of control over the disease. Kokoszka 4 and Kokoszka et al 5 introduced the concept of perception of self-influence on the disease and defined it as "the extent of belief about one's own abilities to shape the disease course". While the psychosocial problems and barriers related to diabetes mellitus management have been studied extensively, there is scarce information on adherence problems that occur when switching from human biphasic insulin to biphasic insulin analogs. This switch in therapy requires changes in the administration scheme due to significant differences in the pharmacodynamic and pharmacokinetic profiles between human insulin and insulin analogs. Little is also known about the factors influencing patient adherence during the change in therapy.
The aim of the present study was to assess difficulties with adherence to insulin analogs in patients with type 2 diabetes mellitus (T2DM) previously treated with human insulin. The associations between difficulties with adherence and clinical, demographic, and psychological data (including well-being, coping style, and perception of self-influence on the disease) were examined.
Patients and methods
In this observational study, 343 physicians from Poland enrolled consecutive patients with T2DM, who were switched from human biphasic insulin to an analog of biphasic insulin. Diabetes was diagnosed according to the Diabetes Poland guidelines published in 2010, ie, symptoms of hyperglycemia and random blood glucose concentration ≥200 mg/dL (≥11.1 mmol/L), or fasting glucose ≥126 mg/dL (≥7.0 mmol/L, based on two measurements on separate occasions), or blood glucose at 120 minutes during an oral glucose tolerance test (OGTT) ≥200 mg/dL (≥11.1 mmol/L). 6 The patients were selected using the code for diabetes (E11) of the International Statistical Classification of Diseases and Related Health Problems (ICD-10). 7 The decision to change treatment was at the discretion of treating physicians; it was separated from patients' enrollment to the study and was based on the individual patient's clinical needs. The implemented therapy was treatment with either a biphasic insulin analog only (conventional insulin therapy) or a biphasic insulin analog in combination with a rapid-acting insulin analog (conventional intensified insulin therapy). The inclusion criteria were age >30 years, treatment with biphasic insulin, and change in therapy from human insulin to insulin analog 7-61 days prior to the visit during which the questionnaire was completed.
All patients completed the following assessments during their visit (completed by the physician and/or the patient):
• Physician questionnaire - developed for the purpose of the study to collect patient demographic and auxological information, data on the history of diabetes, current metabolic control, and treatment with insulin analogs. Body mass index (BMI) was calculated based on height and weight data extracted from patient medical files.
• Questionnaire on possible difficulties at the time of switch from human insulin to insulin analogs (Table S1) - a set of five questions about the level of difficulty in implementing treatment with insulin analogs. These five specific problems were identified based on the patient reports provided by treating clinicians. Each of the possible responses (five-point Likert scale: no problem, insignificant problem, moderate problem, major problem, and very significant problem) corresponded to a score of 0-4. The final score was calculated as the sum of the ratings for all answers (see the sketch after this list). The scale showed high reliability (Cronbach's alpha = 0.83), and all items correlated highly with the scale (from 0.6 to 0.7). The questionnaire also contained information on the frequency of adherence errors (having a snack between main meals, injecting a biphasic insulin analog much earlier before a meal, forgetting about changing the analog biphasic insulin dose after eating a snack, and experiencing hypoglycemia). The frequency of errors since the patient's previous visit was reported using a five-point Likert scale (never, once, a few times, up to five times, and more than five times). For the purpose of the analysis, the data were dichotomized as one to five times and more than five times.
• Coping styles were assessed using two questions, related to health and social problems, from the Brief Method of Evaluating Coping with Disease. 5,8 The full version of the questionnaire includes descriptions of four stressful situations (related to health, social, financial, and interpersonal problems); however, for the purpose of this study, only the questions related to health and social problems were used. Based on the responses, the patient's coping style was determined as either task-oriented, best solution-oriented, emotion-oriented, avoidance-oriented, or a combination of the aforementioned styles.
• WHO-5 well-being index - initially used for the diagnosis of depression in the general elderly population. 9 Recently, it has been validated as a screening tool for depression in patients with diabetes. 10 The questionnaire contains five positive statements about well-being, and patients indicate for how long over the preceding 2 weeks they have been feeling this way. The score is the sum of the individual responses, where 0 represents the worst and 25 the best well-being. Patients with a total score of <13 points, or those who answered 0 or 1 to any of the five items, need further diagnosis. According to more recent guidelines, a total score of <7 points suggests a high probability of depression. 11 This tool has adequate internal and external reliability, and the questionnaire proved to be sufficiently homogeneous (Loevinger's coefficient 0.47; Mokken's coefficient >0.3 in nearly all elements). 9 In the present study, the Polish version of the index, available on the WHO website, was used. 12 The reliability of the index in the present sample was high (Cronbach's alpha coefficient 0.877).
• Assessment of perception of self-influence on the disease course was performed by the physician using a Likert scale based on the criteria for assessing the validity of the "Brief measure to assess perception of self-influence on the disease course. Version for diabetes". 4,5 The score ranged from 0 to 4, with 0 indicating the lowest and 4 the highest perceived influence on the disease course.
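The scoring rules above are simple enough to capture in a few lines. The following is a minimal sketch, not the study's actual analysis code (which was performed in SPSS and is not published); the array layouts and function names are hypothetical. It computes the summed difficulty score, Cronbach's alpha for the five-item scale, and the WHO-5 screening flag exactly as described above.

```python
import numpy as np

def difficulty_score(items: np.ndarray) -> np.ndarray:
    # items: subjects x 5 matrix of 0-4 Likert ratings; total score is their sum (0-20)
    return items.sum(axis=1)

def cronbach_alpha(items: np.ndarray) -> float:
    # Classical formula: alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

def who5_needs_followup(who5: np.ndarray) -> np.ndarray:
    # Screening rule described above: total < 13 points, or any item answered 0 or 1
    return (who5.sum(axis=1) < 13) | (who5 <= 1).any(axis=1)
```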
Ethical statement
The study was approved by the Bioethics Committee of the Medical University of Warsaw. All patients were provided with oral and written information about the study, before signing an informed consent form.
Statistical analysis
The SPSS statistical package (version 17) was used for all data analyses. The normality of the distribution of variables was tested using two tests: the Kolmogorov-Smirnov test and the Shapiro-Wilk test. As the data were not normally distributed, nonparametric tests were applied. To compare differences in the severity of difficulties with adherence to the therapy among groups with differing levels of education, place of residence, or BMI, the chi-square test was used for nominal variables, while the Mann-Whitney U test and the Kruskal-Wallis H test were used for ordinal variables. The Kruskal-Wallis H test (whose statistic is evaluated against a chi-square distribution) is a rank-based nonparametric test that can be used to determine whether there are statistically significant differences between two or more groups of an independent variable on a continuous or ordinal dependent variable. It is considered the nonparametric alternative to one-way analysis of variance, as it allows the comparison of more than two independent groups.
To assess correlations between severity of the difficulties and variables such as age and BMI, the Spearman's rank correlation coefficient was calculated. The Spearman's rho correlation coefficient is the non-parametric equivalent of Pearson's r coefficient. A significance level of 0.05 was used in all tests. To minimize the probability of misclassification of data originating from patients with type 1 diabetes mellitus, only patients ≥30 years of age were included in the analysis dataset.
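As an illustration only, the testing sequence described above could be reproduced with SciPy roughly as follows; the data frame and its column names are hypothetical, and the original analysis was performed in SPSS, not Python.

```python
import pandas as pd
from scipy import stats
from scipy.stats import zscore

df = pd.read_csv("adherence_data.csv")  # hypothetical input file

# Normality checks; non-normal data motivates the nonparametric tests below
print(stats.shapiro(df["difficulty"]))
print(stats.kstest(zscore(df["difficulty"]), "norm"))  # K-S test on standardized values

# Two independent groups (ordinal outcome): Mann-Whitney U test
men = df.loc[df["gender"] == "M", "difficulty"]
women = df.loc[df["gender"] == "F", "difficulty"]
print(stats.mannwhitneyu(men, women))

# Three or more groups (e.g., education levels): Kruskal-Wallis H test
groups = [g["difficulty"].to_numpy() for _, g in df.groupby("education")]
print(stats.kruskal(*groups))

# Nominal variables: chi-square test on a contingency table
print(stats.chi2_contingency(pd.crosstab(df["education"], df["residence"])))

# Continuous/ordinal associations: Spearman's rank correlation
print(stats.spearmanr(df["age"], df["difficulty"]))
```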
Results
In total, 4,041 sets of completed questionnaires were collected. Among the included patients, 91.2% started treatment with insulin analogs 7-61 days before inclusion into the study (mean =21.0; SD =9.23 days). After excluding patients younger than 30 years, 3,467 patients were included in the analysis.
Characteristics of the studied group
The wait for a meal following injection was not a problem or at most a moderate problem for 99% of patients. Less than 2% of patients reported difficulties with identifying hypoglycemic symptoms as major or very significant. The details are presented in Table 1.
The patient-reported difficulties in adherence to the therapy were reflected in the frequency of the adherence errors defined in the Questionnaire on possible difficulties at the time of switch from human insulin to insulin analogs. The most common adherence error was snacking between meals, and the least common was hypoglycemia (Table 2).
Severity of difficulties in adherence was positively correlated with age (rho = 0.11; P < 0.001) and was associated with the level of education (chi-square = 92.73; df = 2; P = 0.0001) and the place of residence (P = 0.0001), but was not related to gender. With regard to education, patients who had completed a higher level of education reported fewer problems, and those with only a basic level of education reported the most severe difficulties. More significant difficulties in adherence were seen in patients dwelling in rural areas than in those residing in big cities. Furthermore, small but statistically significant correlations were observed between the severity of difficulties and HbA1c (data not shown).
Impact of BMI
The mean BMI in the study population was 29.7 kg/m 2 (SD =4.45). Differences in mean BMI between patient subgroups stratified by the degree of difficulty in skipping snacks between main meals were observed (P=0.0001; Figure 1), with greater problems reported by patients with a higher BMI than by those with lower BMI.
Coping styles
In the studied population, the most common coping style was the mixed/undifferentiated style (26.9%), and the least common was the emotion-oriented style (9.1%). The greatest difficulties (mean rank of difficulties in adherence) were observed in patients who used an emotion-oriented coping strategy, and the lowest in those who used an adaptive mixed coping style combining best solution-seeking and task-oriented strategies (chi-square = 159.87; P = 0.0001). The detailed data are presented in Table 3.
Well-being and risk of depression
The mean WHO-5 scale score in the entire study population was 15.1 (SD = 4.77). A result of <13 points, or an answer of 0 or 1 to any of the five items, was noted for 913 patients (26.3%), and a result of <7 points on the WHO-5 scale was recorded for 256 patients (7.4%). A negative correlation between the results of the WHO-5 scale and problems with adherence to therapy was observed (Spearman rho = −0.295; P < 0.0001). A higher incidence of difficulties with adherence was reported by patients with low scores on the WHO-5 scale.
Perception of self-influence on the disease course
The mean score in the entire study population was 2.5 (SD = 0.96). Severity of difficulties in adherence to the recommendations during treatment with insulin analogs was negatively correlated with the degree of the perception of self-influence on the course of the diabetes (Spearman rho = −0.295; P < 0.01). A lower intensity of problems with adherence was observed in patients with a higher level of perception of self-influence on the course of the disease.
Discussion
In the present study, patient adherence following a physician-directed switch from human insulin to an insulin analog was evaluated. Modification of therapy is usually postponed by both patients and physicians because it is associated with a sacrifice of time and the need for additional education; however, in the present study more than half of the patients had no problem adjusting to the new regimen. This ease of transition is probably due to the good safety profile and simple dosage scheme of insulin analogs, which make them a more effective and convenient therapy in comparison with human insulin. 3,13-15 The results of the IMPROVE study indicate that intensifying basal insulin (both human insulin and insulin analog) regimens to the biphasic insulin regimen positively affects the outcomes of therapy. The authors observed improved glycemic control, reduced risk of hypoglycemia, no significant change in weight, and increased patient satisfaction after such a change in therapy. 16 These treatment benefits may result from the better pharmacokinetic and pharmacodynamic properties of the biphasic insulin analog and the fact that it can be dosed immediately before or after a meal. 16-18
In the present study, the most frequently reported, though not severe, difficulties were the need to adjust the dose of the biphasic insulin analog in the case of an extra meal and the need to forego snacking between main meals. Consequently, snacking between main meals and forgetting to adjust the insulin analog dose were the most frequent types of errors. We indirectly assessed the relationship between the level of difficulty in abstaining from snacks and BMI. As expected, patients with higher BMI had greater problems with snacking between main meals than patients with normal weight. This finding may have clinical implications because higher baseline BMI is a negative predictor of success in diabetes treatment, 19 and obesity is associated with worsened glycemic control in patients with T2DM treated with insulin. 20
Glycemic control influenced by adherence depends on many factors, including patient coping strategies. 3 Previous work has shown that positive coping styles (more approach-oriented and focused on dealing with the stressor itself) are associated with better glycemic control. 3,21-23 Conversely, avoidant and emotional strategies (dealing with the emotional response to a stressor) are associated with adjustment problems and regimen nonadherence. 3,21-23 Similar observations were made in our study: the most severe difficulties were reported by patients with an emotion-oriented coping strategy, and the lowest by those using an adaptive mixed strategy combining best-solution and task-oriented coping. According to the psychological "goodness of fit" hypothesis, the coping mechanism is related to the controllability of a stressor. 24 When dealing with a controllable agent, people are more likely to use a problem-focused coping strategy, 24-26 though a patient's selection of coping style also depends on the severity and duration of the disease, with patient experiences, including emotional and cognitive factors, influencing the decision. 23,27 Nonetheless, any application of problem-solving strategies requires the perception that control over the stressful problem can be gained.
In turn, the perception of having control over the diabetes requires the perception of self-influence on the course of the disease. The concept of self-influence relates only to coping with the disease and is consequently narrower than perceived self-efficacy, which determines how a person feels, thinks, self-motivates, and behaves. 28 Hence, perception of self-influence is specific to disease management and is therefore more precise. 5 In our study, a higher level of perception of self-influence on the course of the disease was related to a lower intensity of problems with adherence. Similarly, Sarkadi et al 29 observed that patients belonging to the "active" category of self-perceived role in diabetes management have better outcomes compared with those having a "passive" attitude. Indeed, perceived control of diabetes was found to be a significant predictor of engagement in diabetes self-care and other desirable health behaviors that strongly influence adherence to therapy. 30,31 The presence of diabetes almost doubles the risk of comorbid depression. In turn, it is well documented that depression and low well-being are related to nonadherence to treatment, 32-34 and patients with depression usually have poorer glycemic control. 35 As ∼30% of patients with diabetes have depressive symptoms and ∼12%-18% meet the criteria of major depression, 36,37 this is a serious problem. A similar relationship was observed in the present study: patients who received a low score on the WHO-5 scale, an indicator for depression, had worse adherence, as measured by a higher mean incidence of difficulties with the therapy. Our results indicate that there is a need for screening for depression symptoms in patients with diabetes and that treating depression may enhance diabetes control by improving patient adherence.
Study limitations
As the present study relies on patient-reported data, it is important to remember that the time elapsed from the implementation of the insulin analog to the time of this study varied widely from patient to patient. For this reason, it is possible that patients who started insulin analog therapy earlier might not have recalled all the initial difficulties as easily as those who had started treatment more recently. Furthermore, a certain degree of selection bias may impact the results. The study population consisted of consecutive patients switching therapy and not consecutive patients on human insulin. Therefore, patients who accepted this new therapy were likely expecting to benefit from the switch and thus showed strong adherence. The author also acknowledges that the psychological measures used in the present study do not have strong psychometric properties; however, they are useful, commonly used tools that can be applied by the physician during a regular medical visit. In summary, these results should be interpreted with caution, keeping in mind the limitations discussed earlier and also those inherent to the nature of observational studies, such as recall bias, missing data, and sporadically observed lower data quality.
Conclusion
Overall, switching from human biphasic insulin to biphasic insulin analogs did not cause significant problems in adherence for the majority of patients. However, a subset of patients did report difficulties with treatment adherence, typically those who presented with emotion-oriented coping strategies, low perception of self-influence on disease, or depressive symptoms. It is therefore reasonable to suggest that the approach to treatment of diabetic patients should be biopsychosocial rather than simply biomedical.
Supplementary material
Table S1 Questionnaire on possible difficulties at the time of switch from human insulin to insulin analogs (completed by the physician and the patient) | 2018-04-03T01:28:20.688Z | 2017-03-17T00:00:00.000 | {
"year": 2017,
"sha1": "608f260bb34e6d226add8846441704fc9e363d8c",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=35548",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c20687c23a64b4df6d5403d2e4a16f00d9b2b2bc",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247584089 | pes2o/s2orc | v3-fos-license | Quantitative 23Na‐MRI of the intervertebral disk at 3 T
Monitoring the tissue sodium content (TSC) in the intervertebral disk geometry noninvasively by MRI is a sensitive measure to estimate changes in the proteoglycan content of the intervertebral disk, which is a biomarker of degenerative disk disease (DDD) and of lumbar back pain (LBP). However, application of quantitative sodium concentration measurements in 23Na‐MRI is highly challenging due to the lower in vivo concentrations and smaller gyromagnetic ratio, ultimately yielding much smaller signal relative to 1H‐MRI. Moreover, imaging the intervertebral disk geometry imposes higher demands, mainly because the necessary RF volume coils produce highly inhomogeneous transmit field patterns. For an accurate absolute quantification of TSC in the intervertebral disks, the B1 field variations have to be mitigated. In this study, we report for the first time quantitative sodium concentration in the intervertebral disks at clinical field strengths (3 T) by deploying 23Na‐MRI in healthy human subjects. The sodium B1 maps were calculated by using the double‐angle method and a double‐tuned (1H/23Na) transceive chest coil, and the individual effects of the variation in the B1 field patterns in tissue sodium quantification were calculated. Phantom measurements were conducted to evaluate the quality of the Na‐weighted images and B1 mapping. Depending on the disk position, the sodium concentration was calculated as 161.6 mmol/L–347 mmol/L, and the mean sodium concentration of the intervertebral disks varies between 254.6 ± 54 mmol/L and 290.1 ± 39 mmol/L. A smoothing effect of the B1 correction on the sodium concentration maps was observed, such that the standard deviation of the mean sodium concentration was significantly reduced with B1 mitigation. The results of this work provide an improved integration of quantitative 23Na‐MRI into clinical studies in intervertebral disks such as degenerative disk disease and establish alternative scoring schemes to existing morphological scoring such as the Pfirrmann score.
| INTRODUCTION
Back pain is a common medical symptom with massive socioeconomic implications due to its effects on patients' well-being, the inability to work, and the consequential high healthcare costs. 1 Conventional 1 H-MRI with morphological sequences is a routinely used imaging technique and provides a tool to rule out common causes such as disk extrusion, degenerative alterations of the facet joints or nerve root compression. 2 However, the symptoms are often unrelated to degenerative findings. 3,4 Noninvasive quantification of the local sodium content by MRI is a sensitive measure of tissue integrity and a valuable method to monitor tissue viability and ion homeostasis in clinical research. 5 Both the technological advancements on the hardware side of MRI and the emergence of efficient data acquisition techniques (such as ultra-short echo time, T E , sequences) have enabled the utilization of 23 Na-MRI as a potential tool to directly infer functional and structural information on the tissue, otherwise not achievable by conventional 1 H-MRI. 6-8 Previous studies have shown that the sodium content in the disks correlates with the proteoglycan (PG) content, which in turn seems to be a biomarker of degenerative disk disease (DDD) and of lumbar back pain (LBP). 9,10 Histologically, the intervertebral disk consists of an outer annulus fibrosus and an inner nucleus pulposus. Biochemically, these components are largely composed of collagen and PG. The negative fixed charge density attracts sodium ions, so that the sodium concentration within the disk is directly proportional to the fixed charge density. 11 The pathologic and degenerative reduction of the PG content, and of the attraction of positively charged sodium ions, causes a lower oncotic pressure and ultimately collagen degeneration. Therefore, 23 Na-MRI is a sensitive method to estimate changes in the PG content of the intervertebral disk, as it is for articular cartilage. 12,13 Although 23 Na-MRI offers valuable supplementary information to 1 H-MRI, the practical implementation of tissue sodium quantification poses high demands on measurement accuracy and precision. First, the signal-to-noise ratio of 23 Na-MRI is significantly lower than that of 1 H-MRI. This is mainly because the sodium concentration in tissue is much lower (45 mM to 350 mM) than the water concentration, and the gyromagnetic ratio is smaller (γ H /γ Na ≈ 3.7), ultimately yielding an 11 000 times smaller 23 Na signal relative to 1 H-MRI under equivalent detection conditions for the two nuclei. 14 Second, as a spin-3/2 nucleus, 23 Na exhibits rapid biexponential transverse relaxation 15 and may also exhibit energy eigenstate shifting (or residual quadrupole splitting) as a result of local electric field gradients due to the long-lived spatial orientation of the nuclear electric quadrupole moment. 16,17 Third, blurring results from the point spread function and from physiological motion. Fourth, strong partial volume effects because of the large voxel sizes lead to further inaccuracies. Finally, the signal modulation due to the inhomogeneity in the B 1 field appears as a significant challenge for accurate quantification. 18,19 The quantification of the tissue sodium content (TSC) in units of millimoles per liter potentially allows the inter- and intra-individual comparability necessary for patient stratification and for therapy monitoring. 20
However, quantitative analysis of sodium concentration from 23 Na-MRI requires accurate control of the factors modulating the sodium signal. 21 Primarily, the accuracy and precision of the quantitative tissue sodium measurement are highly dependent on the spatial and temporal modulation of all the magnetic fields involved in the experimental realization. Significant error can be introduced into the TSC measurement by B 1 inhomogeneity. 18,19 A common approach to overcome this issue in 1 H-MRI is to use a relatively homogeneous transmit coil (such as a quadrature birdcage), assuming that variations in flip angle are small in the imaging volume. 22 However, if body-volume coils or other highly inhomogeneous transmit coils must be used, this is not applicable, and B 1 inhomogeneity has to be included in the quantification model. The specific geometry of the intervertebral disks and the antenna specifications necessary to image this geometry raise the accuracy demands on quantification. Therefore, particularly for the intervertebral disks, B 1 field variations have to be mitigated by adequate correction methods to perform a reliable tissue sodium quantification.
The goal of this work is to provide quantitative TSC measurements using 23 Na-MRI in a clinical setting by taking into account the effects of spatial variations in the B 1 field along and across the intervertebral disk anatomy. To this end, 23 Na-MRI was performed to image the intervertebral disks at 3 T in healthy human subjects. The sodium B 1 maps were acquired with a double-tuned ( 1 H/ 23 Na) transceive chest coil, and their effects on tissue sodium quantification were examined.
| Study participants
Written informed consent was obtained from six participants. The ethical board of the institution approved the study, and all volunteers were provided with information prior to the examination. Due to macroscopic motion, the data from one subject were rejected. Imaging with the same protocol was performed on the intervertebral disks of a total of five healthy volunteers (two males and three females) with an average age of 29.1 ± 4.2 years.
| Phantom measurements
To evaluate the distribution of the B 1 field in a homogeneous medium, measurements were performed on a cylindrical polyethylene volume container filled with 150 mmol/L NaCl, using the same protocol and data processing as detailed below. Additionally, five cylindrical phantom tubes filled with 50, 100, 150, 200 and 250 mmol/L NaCl in 5% agarose gel were used as sodium signal intensity references in the experiments. The phantoms were located at the center of the volume coil, and sodium concentration measurements were made with and without B 1 field correction to validate the applied 23 Na imaging protocol and quantification framework.
| 1 H-MRI
A T 1 - and T 2 -weighted 1 H anatomical imaging protocol was performed to facilitate image segmentation. The same coil was used for excitation and signal reception in 1 H-MRI without repositioning the subject. A FLASH sequence was used to acquire the T 1 -weighted structural scan (T R /T E = 308/4.77 ms, 24 interleaves, FOV = 320 × 320 mm 2 , voxel size = 1 × 1 × 1 mm 3 ).
| B 1 field mapping
The contrast variation of 23 Na-MRI is fundamentally determined by the RF coils used for signal excitation and reception, and, as expected, the B 1 field patterns can change drastically depending on the antenna involved. Sodium B 1 mapping was performed using the double-angle method, which encodes the flip angle into the amplitude of the complex MRI signal. 23 The B 1 field distribution is determined by the ratio of two images obtained at different nominal flip angles α 0 and 2α 0 . The flip angle map α(r) was calculated from two GRE volumes acquired with nominal flip angles of 45° and 90°, based on the relationship between the effective and nominal flip angles:
α(r) = arccos( |I 90 (r)| / (2 |I 45 (r)|) )
where I 45 and I 90 are the intensities of the corresponding voxels in the images acquired with the 45° and 90° flip angles. A long repetition time (T R = 100 ms) was used to minimize the T 1 dependence of the B 1 mapping. Note that only the transceive coil was considered; based on the principle of reciprocity, both the transmit and reception field amplitudes are denoted as B 1 in this study. The flip angle maps were calculated on a voxel-by-voxel basis across the imaging volume. The correction factor for every voxel was calculated by dividing the actual flip angle obtained from the flip angle map by the nominal flip angle, and the resulting factor was applied to mitigate the B 1 inhomogeneity in the sodium images.
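A minimal numerical sketch of this voxel-wise computation follows; it is not the authors' implementation (the published analysis used MATLAB), and the clipping guard is an assumption added for numerical safety at low SNR.

```python
import numpy as np

def double_angle_b1_map(i45: np.ndarray, i90: np.ndarray, nominal_deg: float = 45.0):
    # S(alpha) ~ sin(alpha) and S(2*alpha) ~ 2 sin(alpha) cos(alpha) at long TR,
    # so cos(alpha) = |I90| / (2 |I45|) voxel by voxel.
    ratio = np.abs(i90) / (2.0 * np.abs(i45) + 1e-12)
    alpha = np.degrees(np.arccos(np.clip(ratio, 0.0, 1.0)))  # actual flip angle (deg)
    correction = alpha / nominal_deg  # ratio of actual to nominal flip angle
    return alpha, correction
```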
| Sodium quantification
Sodium concentration quantification is usually done by placing reference phantoms with known sodium concentrations and relaxation times within the field of view of the images (i.e., the reference tubes usually range over 10 mM-150 mM for imaging brain or muscle, and 100 mM-350 mM for cartilage). 20 In the volunteer measurements, five standardized reference sodium tubes were included in the field of view to allow quantification of the TSC. Moreover, internal tissue references such as cerebrospinal fluid, with a well-defined sodium concentration of 150 mmol/L, were used as a reference and for validation. 24
2.6 | Image processing and data analysis
Subsequent data processing and analysis were carried out using self-developed scripts in MATLAB (MathWorks, Natick, MA, USA). TSC was calculated via an intensity calibration curve fitted to the signal of the reference tubes, and the linear regression curve was used to extrapolate the sodium maps of the intervertebral disks. To minimize the influence of B 1 inhomogeneity on 23 Na-MRI, particularly in the specified geometry, all sodium-weighted images were subjected to voxel-wise B 1 field correction. In vivo TSC values are presented as mean and standard deviation in millimoles per liter. To determine the regions of interest (ROIs) of the intervertebral disks, T 1 -weighted 1 H images were used. Intensity thresholding was performed after plotting the border contours, resulting in a binary mask in which the individual intervertebral disks were identified. The binary masks defining the intervertebral disks were verified by an external radiologist. The B 1 inhomogeneity-corrected TSC maps were calculated within the identified intervertebral disks.
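The calibration and masking steps above reduce to a short pipeline. The sketch below is a simplified Python analog of the described MATLAB workflow; the function name, the division-based B 1 mitigation, and the mask inputs are assumptions, not the published code.

```python
import numpy as np

def tsc_map(na_image: np.ndarray, b1_correction: np.ndarray,
            tube_masks: list, tube_conc: list, disk_mask: np.ndarray) -> np.ndarray:
    corrected = na_image / b1_correction  # simple voxel-wise B1 mitigation
    # Linear calibration: mean corrected tube signal -> known concentration (mmol/L)
    tube_signal = [corrected[m].mean() for m in tube_masks]
    slope, intercept = np.polyfit(tube_signal, tube_conc, 1)
    tsc = slope * corrected + intercept
    return np.where(disk_mask, tsc, np.nan)  # report TSC only within the disk ROI
```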
In vivo quantitative TSC values for each subject are presented as mean and standard deviation in millimoles per liter. A paired-sample t-test was performed to compare the mean sodium concentrations of different subjects for the uncorrected and B 1 corrected cases.
| RESULTS
The dual-tunable ( 1 H/ 23 Na) chest coil used for the reception of 1 H, and for the transmission and reception of 23 Na signals, is shown in Figure 1A. Figure 1B shows a simulation of the transmit field efficiency of the coil in the transverse axis, illustrating that, while the B 1 field has a relatively homogeneous pattern in the close vicinity of the transmit elements, it decreases towards the center of the coil. Figure 1C shows a spin-density-weighted sagittal image of the intervertebral disk geometry used as a localizer; the intensity variations in the image as a result of field inhomogeneity are observable. Five reference tubes with varying sodium concentrations were placed in the field of view. Figure 1D shows the flip angle map for the proton signal, exhibiting a relatively high SNR in the intervertebral disks but poor SNR between the disks along the spine. The nonuniformity of the transmit field for protons is also visible on the reference tubes. Figure 2 shows the effect of the B 1 field on quantitative sodium imaging in a homogeneous phantom. Figure 2A shows the sodium concentrations of the reference tubes and the imaging phantom. Five reference tubes with 50 mmol/L, 100 mmol/L, 150 mmol/L, 200 mmol/L and 250 mmol/L NaCl in 5% agarose were placed above the imaging phantom, which had a sodium concentration of 150 mmol/L. Figure 2B shows the Na image of the phantoms in arbitrary units, in which the concentration gradient of the reference tubes from left to right is clearly depicted. Figure 2C shows the sodium flip angle map acquired with the double-angle method (actual flip angle divided by the nominal flip angle). Even though the phantom and the reference tubes were placed at the center of the coil geometry, and the imaging volume is much smaller than that of a human subject, there is up to 15% field nonuniformity across the phantom. The corresponding TSC maps without and with B 1 field correction are illustrated in Figure 2D and 2E, respectively. The color bar refers to the actual sodium concentrations within the tubes and the phantom, demonstrating that the true concentrations were accurately reconstructed from the 23 Na-MR images. The effect of the B 1 field correction is depicted in Figure 2F as bar graphs of the B 1 -corrected and uncorrected concentrations within the tubes and the phantom. While the B 1 field exhibits relatively small variation within the imaging phantom, around 5%, the inhomogeneity increases along the reference tubes up to 15%, as implied by the flip angle map. Figure 3 shows the ROI placement using the T 1 -weighted 1 H images. Figure 3A shows the T 1 -weighted image, in which the individual intervertebral disk structures are clearly visible. Figure 3B shows the intensity thresholding of the T 1 -weighted structural image within the determined contours along the spine and the binary mask extracted from the thresholded images, which is used as the ultimate ROI to calculate the TSC within the intervertebral disks. The intervertebral disks from T9/T10 to L5/S1 are labelled on the image. Figure 3C shows the ROI for the CSF identified on the structural image as the internal tissue reference and validation of the sodium concentration measurement. Figure 3D shows the quantitative TSC map; the sodium concentration in the CSF was measured as 148 ± 7 mmol/L within the defined ROI, and the sodium concentrations within the intervertebral disks were accurately quantified based on the references.
Figure 4 presents the effect of the B 1 field on quantitative sodium imaging in an exemplary subject. Figure 4A shows the 23 Na images, in which the sodium content of the intervertebral disks is clearly depicted in arbitrary units. The limited SNR provided by the utilized antenna prevents us from imaging the whole spine structure at the same time. Figure 4B shows the in vivo sodium flip angle map, depicted as the ratio of the actual to the nominal flip angle, which was deployed as the voxel-by-voxel correction factor. Magnetic field nonuniformity of up to 40% along and across the intervertebral disk geometry was measured by the flip angle map. Figures 4C and 4D show the in vivo quantitative sodium concentration maps without and with B 1 field correction, respectively. Even though a limited resolution was provided by the 23 Na-MRI, the anatomical structure of the intervertebral disks can be identified as annulus fibrosus and nucleus pulposus from the quantitative TSC maps. A further observation yielded by Figure 4D is that, even though the B 1 field variations were corrected, the quantitative TSC values vary among the individual disks along the spine, which potentially has a physiological origin.
The mean sodium concentration values are listed in Table 1 and Table 2. Figure 5B shows the B 1 -corrected TSC variation among the individual disks for the different subjects. It is highly noticeable that the variation of TSC along the disks for different subjects follows similar patterns, with an increasing tendency from T9/T10 until L2/L3, where it peaks, and a decreasing pattern for the lower lumbar disks. Figure 5C shows the effect of the B 1 field correction on the mean TSC of the individual subjects (the average of the mean TSC over all disks for a single subject). The smoothing effect of the B 1 correction, which reduces the standard deviation of the average TSC for each subject, is clear. Figure 5D shows the variation of the normalized TSC (the TSC at each disk divided by the mean TSC of all the disks for a single subject) for the different subjects. As an analogy to the coefficient of variation, the normalized TSC variation yields highly horizontal curves, implying convergence to a constant ratio for every subject, varying between 0.6 and 1.3 for the different disks.
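The per-subject normalization behind Figure 5D is a one-liner; the sketch below assumes a hypothetical subjects-by-disks array of mean TSC values.

```python
import numpy as np

def normalized_tsc(tsc: np.ndarray) -> np.ndarray:
    # tsc: subjects x disks array of mean TSC (mmol/L); each row is divided by
    # that subject's mean over all disks, giving the ratios plotted in Figure 5D.
    return tsc / tsc.mean(axis=1, keepdims=True)
```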
| DISCUSSION
In principle, in order to ensure an accurate quantitative TSC measurement, the acquired 23 Na signal must be corrected for all contrast mechanisms apart from the sodium concentration. The functional information extracted from the Na images can only be reliably applied if all signal modulations are well defined. The B 1 field at the sodium Larmor frequency emitted from a typical body-volume antenna poses significant magnetic field inhomogeneity along the spine, directly affecting the spin evolution and ultimately the 23 Na signal. In this study, we reported for the first time quantitative sodium concentrations in the intervertebral disks at clinical field strength (3 T) by deploying 23 Na-MRI in healthy human subjects. This study is also the first to present sodium B 1 mapping in the intervertebral disks and to assess the effect of the B 1 correction on the quantitative TSC.
Dual-resonant coils such as the one used in this work for sodium imaging are attractive mainly because they allow reference 1 H images to be acquired without repositioning the subject. In contrast to quadrature birdcage coils, which usually provide relatively homogeneous excitation field patterns at 3 T, the body-volume coils covering the intervertebral disk geometry exhibit highly inhomogeneous B 1 field profiles. Additionally, there are several other B 1 mapping techniques that could potentially be utilized, such as actual flip-angle imaging, 25 the Bloch-Siegert shift method 26 or the phase-sensitive method. 19 In comparison studies, it was previously reported that the Bloch-Siegert shift method and the phase-sensitive method are the most accurate methods for 1 H. 27,28 For 23 Na-MRI, the phase-sensitive method has been shown to yield higher quality B 1 maps at low signal-to-noise ratio and greater consistency of measurement than the double-angle method, but with higher vulnerability to large off-resonance shifts. Recently, a method for simultaneous B 1 mapping and imaging was proposed in order to enhance accuracy and to reduce measurement time with higher SNR compared with the double-angle method, which could potentially be applied. 18 To mitigate problems associated with the low signal levels obtained during in vivo sodium imaging and sodium B 1 mapping experiments, the image resolution was limited to pixel sizes of a few millimeters (5 × 5 × 10 mm 3 ). However, such a voxel volume causes the accuracy of 23 Na-MRI to be strongly biased by partial volume effects (PVEs). Previously reported partial volume correction (PVC) methods for 23 Na-MRI of brain tissue could be adapted to the spinal cord geometry. 29 Phantom measurements were conducted to evaluate the quality of the Na-weighted images and the B 1 mapping. The experiments showed that the B 1 correction could significantly improve the quantitative accuracy of the sodium concentration maps, besides smoothing the excitation profile, as implied by the reduced standard deviation. Thus we performed B 1 correction for the in vivo data. In previous work concerning 23 Na-MRI of the head using a transceive birdcage coil, B 1 correction resulted in up to a 10% difference in TSC for white matter, cerebrospinal fluid and vitreous humour. 18 The larger influence of the B 1 correction on the measured TSC values observed in our work is caused by the higher B 1 nonuniformity profile of the body-volume coil used. The sodium concentration values of the intervertebral disk of 161.6 mmol/L-347.0 mmol/L agree with previous studies using sodium MRS. 30-32
After the application of B1 mitigation, the standard deviation of the mean sodium concentration was significantly reduced (p < 0.001), and the smoothing effect of the B1 correction is obvious. 23Na-MRI is a promising quantitative method that might be helpful to determine the biochemical status of the human intervertebral disk and to better understand the pathophysiology of disk degeneration.21 Previous studies have reported that a T2-weighted semiquantitative grading system and the Pfirrmann classification can be used for the assessment of healthy and degenerative disks.33,34 In the clinical routine, measurement of sodium, representing the PG content of the disk, might provide a noninvasive tool for the assessment of disk degeneration at an early stage.

[FIGURE 4. Effect of the B1 field in quantitative sodium imaging in an exemplary subject. A, 23Na images, in which the sodium content of the intervertebral disks is clearly depicted in arbitrary units; the limited SNR provided by the utilized antenna prevents imaging the whole spine structure at the same time. B, the in vivo sodium flip angle map, depicted as the ratio of the actual to the nominal flip angle, which was further deployed as the voxel-by-voxel correction factor; the magnetic field nonuniformity measured up to 40% along the intervertebral disk geometry. C, in vivo quantitative sodium concentration maps without B1 field correction. D, in vivo quantitative sodium concentration maps with B1 field correction.]

This study has some limitations concerning the technical aspects. First of all, no B0 correction was performed. Although the B0 maps we obtained from the phantom measurements exhibit negligible off-resonance (below 3 Hz), susceptibility artifacts are highly prevalent along the spinal cord: the different susceptibilities of the intervertebral disks and the lungs cause significant variations in the B0 field. However, B0 shimming is highly challenging in the spinal cord geometry using the standard shim procedure, mainly because not all inhomogeneity can be compensated with only first- and second-order shim coils.35 Higher-order shimming and slice-wise dynamic shimming are needed for sufficient compensation.

[FIGURE 5. Quantitative analysis of TSC within the individual disks and among different subjects. A, variation of the average TSC values among the individual disks for a representative subject and the effect of B1 field correction on it. B, B1-corrected TSC variation among the individual disks for different subjects; the variation of TSC along the disks shows notably similar patterns across subjects, increasing from T9/T10 to a peak at L2/L3 and then decreasing for the lower lumbar disks. C, effect of B1 field correction on the mean TSC of individual subjects (average of the mean TSC over all disks for a single subject). D, variation of the normalized TSC (the TSC at each disk divided by the subject's mean TSC over all disks); in analogy to a coefficient of variation, the normalized curves are nearly horizontal, converging to a roughly constant ratio between 0.6 and 1.3 across disks for every subject.]
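The normalized TSC metric summarized in the Figure 5 caption reduces to a one-line computation; a minimal sketch with illustrative values:

```python
import numpy as np

# Mean TSC per disk for one subject (illustrative values, mmol/L).
disk_tsc = np.array([180.0, 210.0, 250.0, 290.0, 260.0, 230.0])

# Each disk's TSC divided by the subject's mean over all disks;
# per the caption, these ratios fall roughly between 0.6 and 1.3.
normalized = disk_tsc / disk_tsc.mean()
```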
36 Moreover, the dynamic variation of the B0 field due to the respiration cycle and moving dielectric tissue, such as the chest and abdomen, can cause significant field fluctuations that require dynamic higher-order shimming.37 It has previously been reported that the 23Na nucleus in vivo exhibits fast biexponential transverse relaxation, such that 60% of the signal decays with a T2* of 0.7-3 ms (fast component) and 40% of the signal decays with a T2* of 16-20 ms (slow component).38 Consequently, it is important to acquire images at a short TE to avoid signal loss from the fast-relaxing component. We acquired images at TE = 1.98 ms, leading to a residual T2* weighting of the signal that could be further reduced by using ultra-short echo time acquisition schemes.7,8 Regarding this study, limitations include the small number of participants as well as the missing reproducibility assessment. Both will be addressed in future work, in particular to study the short- and mid-term variability of the sodium concentration measurement in the intervertebral disks, which is of most interest for clinical studies that monitor variations due to underlying physiological dynamics.
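To illustrate the residual T2* weighting discussed above, the following sketch evaluates the quoted biexponential model at the echo time used here; the mid-range T2* defaults are assumptions within the reported intervals.

```python
import numpy as np

def residual_signal_fraction(te_ms, f_fast=0.6, t2s_fast=1.5, t2s_slow=18.0):
    """Fraction of the 23Na signal remaining at echo time TE under the
    biexponential model quoted in the text: 60% fast component
    (T2* ~0.7-3 ms) and 40% slow component (T2* ~16-20 ms)."""
    return (f_fast * np.exp(-te_ms / t2s_fast)
            + (1 - f_fast) * np.exp(-te_ms / t2s_slow))

# At TE = 1.98 ms a large part of the fast component has already decayed
# (~0.52 of the total signal remains with these assumed values),
# motivating ultra-short echo time acquisition schemes.
print(residual_signal_fraction(1.98))
```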
CONCLUSION
In conclusion, quantitative 23Na-MRI is a promising tool to measure clinically relevant longitudinal changes in the intervertebral disks. Here, we reported quantitative sodium concentration measurements in the intervertebral disks at clinical field strength (3 T). We showed that the B1 nonuniformities associated with the antenna utilized for imaging significantly modulate the resulting TSC and have to be mitigated for quantitative sodium concentration mapping. The results of this work have the potential to enable an improved integration of quantitative 23Na-MRI into clinical studies of the intervertebral disks, such as degenerative disk disease, and to establish alternative scoring schemes to existing morphological scores such as the Pfirrmann score. | 2022-03-22T06:22:41.104Z | 2022-03-20T00:00:00.000 | {
"year": 2022,
"sha1": "4a34ac0743408328d1dfb100b7666cf76bec9bbe",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Wiley",
"pdf_hash": "21bc7c1fcb503c4bea0763cb69efdf60dfca86d8",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266273828 | pes2o/s2orc | v3-fos-license | Equity and use of telehealth modalities among people living with HIV during the COVID-19 pandemic
Background: COVID-19 forced a rapid transition to telehealth. Little is known about the use of telephone versus video visits among people living with or at risk for HIV (PWH).
Setting: We studied electronic health record data from an urban HIV clinic. Our sample included visit- and person-level data. Visit-level data came from appointments scheduled from 30 March 2020 to 31 May 2020. Person-level data came from patients 18+ years of age who completed at least one telephone or video visit during the period of interest.
Methods: We performed a cross-sectional analysis. Our primary outcome was telehealth modality (telephone or video). We compared visit completion status by telehealth modality. We evaluated associations between patient characteristics and telehealth modality using logistic regression.
Results: In total, 1742 visits included information on telehealth modality: 1432 (82%) telephone and 310 (18%) video visits. 77% of telephone visits were completed compared to 75% of video visits (p = 0.449). The clinic recorded 643 completed telehealth visits in April and 623 in May 2020. The proportion of telephone visits decreased from 84% in April to 79% in May (p = 0.031). Most patients participated in telephone rather than video visits (415 vs. 88 patients). Older age (adjusted odds ratio [AOR] 3.28; 95% confidence interval [CI], 1.37-7.82) and Black race (AOR 2.42; 95% CI, 1.20-4.49) were positively associated with telephone visits. Patient portal enrollment (AOR 0.06; 95% CI, 0.02-0.16) was negatively associated with telephone visits.
Conclusion: PWH used telephone more than video visits, suggesting that telephone visits are a vital healthcare resource for this population.
Introduction
On 11 March 2020, the World Health Organization announced the COVID-19 pandemic, prompting an immediate change in healthcare delivery.1 Prior to the pandemic, telehealth was expanding but infrequently used in the United States: only 10% of providers had ever used telehealth, and only 14% of all care at the Veterans Health Administration was provided using telehealth.2,3 The Yale telehealth team had initiated several successful telehealth operations for years prior to the pandemic.4 The accelerated implementation of telehealth during the pandemic5-8 prompted concerns about disparities in telehealth access, especially as they impacted historically underserved populations.9-11 Trends in internet access showed slightly lower rates of internet adoption among older adults, Black and Hispanic populations, people with less formal education, and households with lower incomes.12 Trends in internet use specifically for health information echo these findings.13-15 In fact, research suggests that minoritized populations and patients with less formal education are less likely than white populations to be offered the patient portal access through which much of telehealth, especially video visits, is administered.16

There is a growing literature regarding the use of telehealth to provide care to people living with or at risk for HIV (PWH) and to those receiving pre-exposure prophylaxis (PrEP).17-21 Access to telehealth visits was vital for PWH during the early phase of the COVID-19 pandemic because consistent healthcare engagement helps patients maintain lower viral loads and higher CD4 counts.22 Furthermore, as these patients likely face psychosocial burdens including loneliness and stigma, it was imperative that they be able to receive continued multifaceted specialty care.23 It is thus important to identify disparities in telehealth access that impact this population.
The digital divide is an important limitation to PWH's access to telehealth. Adults with lower income or less formal education are less likely to own a smartphone or to have access to broadband.20 PWH are more likely than the general population to have lower socioeconomic status and to live in impoverished neighborhoods, and thus they likely experience reduced access to the technology and technological experience necessary to navigate video visits.18,24 Telephone consultations may provide a solution to this access problem, but only a limited number of studies report on video versus telephone consultations.25

Our goal was to assess rates of telehealth modality (telephone or video) use in a clinic serving PWH. We also explored patient-level factors associated with telehealth modality use in this population during the early phase of COVID-19.
Methods
We performed a cross-sectional analysis using electronic health record data from an urban HIV clinic housed within a large academic health system in Connecticut. This clinic is the largest provider of comprehensive services to adults living with HIV within the state and provides multidisciplinary primary care for approximately 800 adults living with HIV and 200 adults at risk who take PrEP. The clinical team includes 16 part-time HIV specialist physicians, 2 advanced practice providers, 9 trainees (3 infectious disease fellows and 6 residents in the HIV primary care pathway), 3 HIV consultants (e.g., psychiatrist, neurologist, transplant physician), 1 pharmacist, 2 social workers, 2 nurses, 2 medical assistants, 1 medical case manager, and 1 front desk staff member. The clinic has a call center that assists in patient call triage and scheduling. Clinicians provide care through approximately 7500 medical visits per year. During the early months of the pandemic, the clinic team also included students and other volunteers who helped patients enroll in and orient to the clinic's telehealth system. Nurses, front desk, and other staff assisted providers and patients with telehealth issues before and during visits as they were able. The healthcare system also held training sessions to teach staff how to best utilize video and telephone visits and to troubleshoot during visits.
We followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guidelines for cross-sectional studies.
Sample
We analyzed both visit- and person-level data. Visit-level data came from appointments scheduled at the clinic from 30 March 2020 to 31 May 2020 (N = 1742). We limited our research to the first 2 months of the pandemic to better understand patient choice regarding visit modality (video vs. telephone) when telehealth visits were the only option. We excluded visits for which the telehealth modality was unknown.
Person-level data came from adult patients 18+ years of age who completed at least one telephone or video visit during the period of interest (N = 503). We excluded patients who attended both telephone and video visits, as there were very few patients in this category (N = 18). We also excluded patients who had only no-show, cancelled, or left-without-being-seen visits during this period.
Study variables
Our outcome of interest was telehealth modality: "telephone" for visits using audio methods only and "video" if using an audiovisual platform. Visit modality was recorded by front desk staff at the time of scheduling. Any visits missing information on the type of telehealth visit (telephone or video) were excluded from the analyses. At the visit level, we categorized visits by visit completion status: completed, canceled, no-show, and left without being seen.10 We then categorized visits by the month in which they were completed (April or May) to explore changes in telehealth modality use over time. For this analysis only, we excluded the visits that occurred in March, as there were only 2 days in this month involved in the telehealth transition.
For the patient-level analyses, independent variables included self-reported age, sex, race, ethnicity, preferred language, need for an interpreter, and patient portal enrollment. All have been shown to be associated with telehealth use and are pertinent to questions of health equity.9,14,15 Age was categorized as young (18-44 years old), middle-aged (45-64), and older (65+).14 Race was categorized as White, Black, or other race. Ethnicity was categorized as Hispanic, non-Hispanic, and other. Language categories included English, Spanish, and other.
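As an illustration of this recoding, a short pandas sketch with hypothetical column names:

```python
import pandas as pd

# Hypothetical patient-level frame; values and names are illustrative.
patients = pd.DataFrame({"age": [34, 52, 68]})

# Right-inclusive bins reproduce the categories described above.
patients["age_group"] = pd.cut(
    patients["age"],
    bins=[17, 44, 64, float("inf")],
    labels=["young (18-44)", "middle-aged (45-64)", "older (65+)"])
```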
Statistical analysis
For the visit-level analyses, we generated descriptive statistics using chi-square and Fisher's exact tests as appropriate to compare visit completion status by telehealth modality (telephone or video) and to compare telehealth modality by visit month.
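As an illustration of such a comparison, the sketch below rebuilds an approximate 2x2 completion table from the percentages reported in the Results (the counts are reconstructions for illustration, not the raw study data) and applies a chi-square test:

```python
from scipy.stats import chi2_contingency

# Approximate counts: 77% of 1432 telephone and 75% of 310 video
# visits completed.
table = [[1103, 233],                 # completed (telephone, video)
         [1432 - 1103, 310 - 233]]    # not completed
chi2, p, dof, expected = chi2_contingency(table)
# p should land roughly in line with the reported p = 0.449.
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```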
For the patient-level analyses, we reported means and standard deviations for continuous variables and frequencies for categorical variables. We used t-tests to compare continuous variables and chi-square or Fisher's exact tests as appropriate to compare categorical variables. We then conducted multivariable logistic regression to identify factors associated with telephone visit use, including age, sex, race, ethnicity, preferred language, need for an interpreter, and patient portal enrollment.9,14,15 We presented adjusted odds ratios (AOR) with 95% confidence intervals (CI) and corresponding p-values. Statistical significance was set at p < 0.05. Analyses were performed using SAS 9.4 (SAS Institute).
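The study's models were fit in SAS 9.4; purely as an illustration of the same analysis, a Python/statsmodels sketch with hypothetical dummy-coded column names might look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_telephone_model(df):
    """Multivariable logistic regression for telephone (1) vs. video (0)
    visit use; covariate names are illustrative dummy codings of the
    characteristics listed above, not the study's exact model."""
    X = sm.add_constant(df[["age_65plus", "black_race", "portal_enrolled"]])
    fit = sm.Logit(df["telephone_visit"], X).fit(disp=0)
    ci = fit.conf_int()
    # Exponentiate coefficients to obtain AORs with 95% CIs.
    return pd.DataFrame({"AOR": np.exp(fit.params),
                         "CI_low": np.exp(ci[0]),
                         "CI_high": np.exp(ci[1]),
                         "p": fit.pvalues})
```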
Ethics
Our study was deemed to be exempt by the Institutional Review Board of Yale University School of Medicine (Protocol ID 2000028059), thus consent was not required.
Results

Visit-level analyses
A total of 2668 visits were scheduled at the clinic between 30 March 2020 and 31 May 2020. Of these, 926 were missing information on telehealth visit type (telephone vs. video) and were therefore excluded from the study. Of the 1742 remaining visits, 1432 (82%) were telephone visits and 310 (18%) were video visits. Seventy-seven percent of telephone visits were completed compared to 75% of video visits (p = 0.449). No-show rates were significantly higher for telephone (8%) versus video (5%) visits (p = 0.025). Cancelation rates were significantly higher for video (20%) relative to telephone (15%) visits (p = 0.019).
The clinic recorded 643 completed telehealth visits in April and 623 in May 2020. The proportion of telephone visits decreased from 84% of all telehealth visits in April to 79% in May (p = 0.031).
Patient-level analyses
At the patient level, 503 unique patients completed a telehealth visit during the period of interest. Of these, 415 (83%) completed at least one telephone visit, and 88 (17%) completed at least one video visit (Table 1). Eighteen patients completed both telephone and video visits during the time of interest, and we excluded them from our study. These excluded patients differed from those we included in terms of patient portal enrollment (89% enrollment for patients who attended both modalities vs. 54% for those who attended only one modality, p = 0.003). Otherwise, there were no significant differences between the groups.
Telephone was the most used telehealth modality regardless of age, race, preferred language, need for an interpreter, and enrollment in the patient portal. Those who completed telephone visits were older than those who had video visits (54 ± 14 years vs. 47 ± 13 years, respectively, p < 0.001). They were more likely to be Black (45% vs. 24%, p < 0.001) or of other race (21% vs. 11%, p < 0.001) and to require an interpreter (7% vs. 1%, p = 0.027). Patients attending telephone visits were less likely to be enrolled in the patient portal than those who used video visits (45% vs. 94%, p < 0.001).
In multivariable analyses, age, race, and patient portal enrollment were significantly associated with use of a telephone visit (Table 2). Older patients (AOR 3.28; 95% CI, 1.37-7.82) and those of Black race (AOR 2.42; 95% CI, 1.20-4.49) had higher odds of telephone visits compared to patients who were under 45 years of age and those of White race, respectively. Patients enrolled in the patient portal had lower odds of telephone visit use (AOR 0.06; 95% CI, 0.02-0.16).
Discussion
In this study of telehealth modality use at an urban HIV clinic from March through May 2020, we found that telephone visits were far more common than video visits. There was a lower frequency of cancellations and a higher frequency of no-shows among telephone visits. Telephone visit use decreased from April to May but remained the predominant telehealth modality used. For the patient-level analyses, we found that older age, Black race, and lack of patient portal enrollment were associated with use of telephone visits.
Our findings regarding age and race echo previous reports on telehealth use.14,15 The older population faces unique barriers to video visits, including hearing and visual impairment and limited technological experience.26 Black patients are less likely than white patients to have access to broadband internet and may be less likely to use personal health technology to manage their healthcare.12,13,27 Given that older age and Black race are associated with a greater likelihood of telephone use, failure to provide telephone visits could result in the exclusion of vulnerable patients from care. There is also evidence that telephone visits facilitate access to care; the lower cancellation rate among telephone visits compared to video visits within our study may suggest that telephone visits are an accepted visit modality in the PWH population.28 For these reasons, and because cost is a common barrier to care for PWH, it is vitally important that telephone visits be covered by insurance and that they be covered at rates commensurate with other telehealth modalities.29

While measures to expand broadband networks and telehealth infrastructure have been implemented in recent years, our research suggests that they are insufficient to ensure video visit use.30,31 Many patients who used telephone visits were enrolled in the patient portal, suggesting that they had access to the technology necessary for video visits but still preferred telephone to video visits. Furthermore, telephone remained the main visit modality through April and May. It may be that while patients could be signed up for the portal, they lacked the knowledge and/or internet access required to use video technology. Alternatively, research suggests that clinic and provider factors, such as video infrastructure or biased assumptions about which patients can attend video visits, may be more important than patient preference in determining telehealth modality.14 Clearly, more research is necessary to explore reasons for selecting specific telehealth modalities. However, reimbursement parity for telephone visits and technological education for both patients and providers must be part of the solution to optimize telehealth. As telehealth becomes a more integral part of healthcare, multiple modalities should be available so that providers do not discriminate against those who are unable to or prefer not to use video technologies.
More research, both qualitative and quantitative, is required to further explore telehealth modality use among PWH. In qualitative studies, researchers have shown that both PWH and their healthcare providers have favorable attitudes toward using telehealth.20,32 Future studies should investigate why patients use telephone rather than video visits even when they have access to the necessary technology. Quantitatively, researchers have begun to study the impact of telehealth on CD4 counts and viral load, but no studies have looked at the effects of telephone versus video visits on such outcomes.33 More evidence is required to determine the effectiveness of telephone or video visits in improving patient outcomes and the best use of each modality for various aspects of the chronic disease cascade. Furthermore, as telehealth becomes increasingly ubiquitous, studies should investigate trends in long-term modality use to best inform care for this population. Existing research suggests outcomes similar to those in our study, but more data are needed to ensure this at-risk population receives optimal care.34

Our study had several strengths. We were able to include a larger number of patients (N = 503) and telehealth visits (N = 1742) than had been included in other, comparable studies of PWH.18 In addition, clinic providers included PAs, APRNs, RNs, MDs, and social workers, making our findings applicable to multiple healthcare disciplines. Furthermore, our study period included only telehealth visits, eliminating possible complications from co-occurring in-person visits.
Our study also had notable limitations. First, 35% of scheduled visits during this period were of unknown telehealth modality. This was likely a product of the rapid telehealth transition faced by the clinic: providers and staff were initially unsure of how to schedule and document telehealth visits. Second, the number of telephone visits was likely underestimated, as some visits scheduled as video visits may have been switched to telephone at the time of the appointment. Third, we were unable to assess the impact of factors such as insurance coverage, income, or internet access, which prior research suggests are associated with telehealth use. We were also unable to assess the impact of differences in insurance reimbursement for video versus telephone visits. These factors should be investigated in future studies. Fourth, we could not elucidate what barriers our patients may have faced in attending a video visit, including improper technology, lack of broadband access, insufficient technological knowledge, or personal preference. It is also unclear whether clinic factors (e.g., facility internet, provider technological skill or preference, availability of timely technical assistance, differences in reimbursement, and healthcare system preference) influenced visit modality. Finally, we studied a single urban clinic in a large academic institution with extensive resources available to help clinics make the transition from in-person to telehealth care. Our results may not be generalizable to non-academic or rural settings.
Conclusions
The COVID-19 pandemic forced rapid change in clinical practice. Telehealth expansion provided a chance for patients and providers to safely connect amidst an uncertain public health crisis. It also provided researchers a unique opportunity to study this previously limited practice modality, bringing to light inequities and repercussions that might otherwise have remained hidden. Our study found that PWH used telephone visits more than video visits to engage in care. We also found that differences in age and race were associated with telephone visit attendance, which is consistent with the results of similar studies from different populations. As the use of telehealth continues, we must continue to seek opportunities to provide more efficient, safe, and accessible care.
Table 1. Patient characteristics by telehealth modality.
Table 2. Adjusted associations between patient characteristics and telephone visit use. | 2023-12-16T16:06:27.487Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "12a2946b2fdd9cf75ff21d565307bd682e5f3ddb",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/20552076231218840",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ce8c4b9ff421c80d338c90f2b2ad03e1fce4b7e8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |